Compare commits


1 Commit

Author SHA1 Message Date
c19bc9e91e temporary commit 2025-02-07 10:18:54 +08:00
48 changed files with 606 additions and 1067 deletions

LICENSE (new file, +20 lines)
View File

@@ -0,0 +1,20 @@
The MIT License (MIT)

Copyright (c) 2017 Y. T. CHUNG <zonyitoo@gmail.com>

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

README.md (new file, +110 lines)
View File

@@ -0,0 +1,110 @@
# Remote Build Cache Server
Remote Build Cache Server (shortened to RBCS) allows you to share and reuse unchanged build
and test outputs across the team. This speeds up local and CI builds, since no cycles are wasted
rebuilding components that are unaffected by new code changes. RBCS supports both Gradle and
Maven build tool environments.
## Getting Started
### Downloading the jar file
You can download the latest version from [this link](https://gitea.woggioni.net/woggioni/-/packages/maven/net.woggioni-rbcs-cli/).
If you want to use memcache as a storage backend, you'll also need to download [the memcache plugin](https://gitea.woggioni.net/woggioni/-/packages/maven/net.woggioni-rbcs-server-memcache/).
### Using the Docker image
You can pull the latest Docker image with
```bash
docker pull gitea.woggioni.net/woggioni/rbcs:latest
```
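To actually run the server you will typically want to publish its listening port and provide a
configuration file. A minimal sketch follows; the port number and the configuration path inside
the container are assumptions for illustration, not documented defaults of the image:
```bash
# Port 8080 and the /etc/rbcs path are assumptions, not documented defaults:
# adjust both to match your actual server configuration.
docker run --rm \
    -p 8080:8080 \
    -v "$PWD/rbcs-server.xml:/etc/rbcs/rbcs-server.xml" \
    gitea.woggioni.net/woggioni/rbcs:latest
```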
## Usage
## Configuration
### Using RBCS with Gradle
```groovy
buildCache {
remote(HttpBuildCache) {
url = 'https://rbcs.example.com/'
}
}
```
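By default `HttpBuildCache` only reads from the remote cache. To let a build (typically CI)
upload entries, enable `push`; if the server requires authentication, add credentials. A sketch,
assuming the credentials are supplied through environment variables:
```groovy
buildCache {
    remote(HttpBuildCache) {
        url = 'https://rbcs.example.com/'
        // Upload cache entries from this build (usually enabled only on CI)
        push = true
        credentials {
            username = providers.environmentVariable('RBCS_USER').orNull
            password = providers.environmentVariable('RBCS_PASSWORD').orNull
        }
    }
}
```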
### Using RBCS with Maven
See the [maven-build-cache-extension remote cache documentation](https://maven.apache.org/extensions/maven-build-cache-extension/remote-cache.html).
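As a rough sketch of what that setup looks like (the extension version below is illustrative;
check the documentation for the current release), you register the extension in
`.mvn/extensions.xml` and point its remote cache at the RBCS endpoint in
`.mvn/maven-build-cache-config.xml`:
```xml
<!-- .mvn/extensions.xml -->
<extensions>
    <extension>
        <groupId>org.apache.maven.extensions</groupId>
        <artifactId>maven-build-cache-extension</artifactId>
        <version>1.2.0</version> <!-- illustrative version -->
    </extension>
</extensions>

<!-- .mvn/maven-build-cache-config.xml -->
<cache xmlns="http://maven.apache.org/BUILD-CACHE-CONFIG/1.0.0">
    <configuration>
        <remote enabled="true">
            <url>https://rbcs.example.com/</url>
        </remote>
    </configuration>
</cache>
```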
## FAQ
### Why should I use a build cache?
#### Build Caches Improve Build & Test Performance
Building software consists of a number of steps, like compiling sources, executing tests, and linking binaries. We've seen that a binary artifact repository helps when such a step requires an external component: the artifact is downloaded from the repository rather than built locally.
However, there are many additional steps in this build process which can be optimized to reduce the build time. An obvious strategy is to avoid executing build steps which dominate the total build time when these build steps are not needed.
Most build times are dominated by the testing step.
While binary repositories cannot capture the outcome of a test build step (only the test reports,
when included in binary artifacts), build caches are designed to eliminate redundant executions
of every build step. Moreover, a build cache generalizes the idea of avoiding the work associated
with any incremental step of the build, including test execution, compilation and resource processing.
The mechanism itself is comparable to a pure function: given some inputs, such as source
files and environment parameters, we know that the output is always going to be the same.
As a result, we can cache it and retrieve it based on a simple cryptographic hash of the inputs.
Build caching is supported natively by some build tools.
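The sketch below illustrates the idea in Kotlin; it is a toy model of content-addressed caching,
not RBCS's actual implementation:
```kotlin
import java.security.MessageDigest

// Toy model: build-step outputs cached under a cryptographic hash of their inputs.
val cache = HashMap<String, ByteArray>()

fun cacheKey(inputs: List<ByteArray>): String {
    val md = MessageDigest.getInstance("SHA-256")
    inputs.forEach(md::update)
    return md.digest().joinToString("") { "%02x".format(it) }
}

// Executes the step only on a cache miss; a hit reuses the stored output.
fun runStep(inputs: List<ByteArray>, step: (List<ByteArray>) -> ByteArray): ByteArray =
    cache.getOrPut(cacheKey(inputs)) { step(inputs) }
```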
#### Improve CI builds with a remote build cache
When analyzing the role of a build cache it is important to take into account the granularity
of the changes that it caches. Imagine a full build for a project with 40 to 50 modules
which fails at the last step (deployment) because the staging environment is temporarily unavailable.
Although the vast majority of the build steps (potentially thousands) succeed,
the change cannot be deployed to the staging environment.
Without a build cache, one typically relies on a very complex CI configuration to reuse build step outputs,
or has to repeat the full build once the environment is available again.
Some build tools don't support incremental builds properly. For example, the outputs of a build started
from scratch may vary when compared to subsequent builds that rely on the initial build's output.
As a result, to preserve build integrity, it's crucial to rebuild from scratch (a clean build) in this
scenario.
With a build cache, only the last step needs to be executed, and the build can be re-triggered
when the environment is back online. This automatically saves all of the time and
resources required by the build steps that were already executed successfully.
Instead of executing the intermediate steps, the build tool pulls their outputs from the build cache,
avoiding a lot of redundant work.
#### Share outputs with a remote build cache
One of the most important advantages of a remote build cache is the ability to share build outputs.
In most CI configurations, for example, a number of pipelines are created.
These may include one for building the sources, one for testing, one for publishing the outcomes
to a remote repository, and other pipelines to test on different platforms.
There are even situations where CI builds only build a project partially (i.e., some modules and not others).
Most of those pipelines share a lot of intermediate build steps. All builds which perform testing
require the binaries to be ready. All publishing builds require all previous steps to be executed.
And because modern CI infrastructure means executing everything in containerized (isolated) environments,
significant resources are wasted by repeatedly building the same intermediate artifacts.
A remote build cache reduces this overhead by orders of magnitude, because it provides a way
for all of those pipelines to share their outputs. After all, there is no point recreating an output that
is already available in the cache.
Because there are inherent dependencies between the software components of a build,
introducing a build cache dramatically reduces the cost of splitting a component into multiple pieces,
allowing for increased modularity without increased overhead.
#### Make local developers more efficient with remote build caches
It is common for different teams within a company to work on different modules of a single large
application. In this case, most teams don't need to build the other parts of the software.
By introducing a remote cache, developers immediately benefit from pre-built artifacts when checking out code:
because the code has already been built on CI, they don't have to build it locally.
Introducing a remote cache is a huge benefit for those developers. Consider that a typical developer's
day begins with a code checkout. Most likely the checked-out code has already been built on CI.
Therefore, no time is wasted running the first build of the day: the remote cache provides all of the
intermediate artifacts needed. And in the event local changes are made, the remote cache still yields
partial cache hits for the modules that are unaffected. As other developers in the organization request
CI builds, the remote cache keeps being populated, increasing the likelihood of remote cache hits
across team members.

View File

@@ -2,10 +2,11 @@ org.gradle.configuration-cache=false
org.gradle.parallel=true
org.gradle.caching=true
rbcs.version = 0.1.6
rbcs.version = 0.1.4
lys.version = 2025.02.08
lys.version = 2025.02.05
gitea.maven.url = https://gitea.woggioni.net/api/packages/woggioni/maven
docker.registry.url=gitea.woggioni.net
jpms-check.configurationName = runtimeClasspath

View File

@@ -4,5 +4,4 @@ module net.woggioni.rbcs.api {
requires io.netty.buffer;
exports net.woggioni.rbcs.api;
exports net.woggioni.rbcs.api.exception;
exports net.woggioni.rbcs.api.event;
}

View File

@@ -1,17 +1,14 @@
package net.woggioni.rbcs.api;
import io.netty.buffer.ByteBufAllocator;
import io.netty.buffer.ByteBuf;
import net.woggioni.rbcs.api.exception.ContentTooLargeException;
import java.nio.channels.ReadableByteChannel;
import java.util.concurrent.CompletableFuture;
public interface Cache extends AutoCloseable {
CompletableFuture<ReadableByteChannel> get(String key);
default void get(String key, ResponseHandle responseHandle, ByteBufAllocator alloc) {
throw new UnsupportedOperationException();
}
default CompletableFuture<RequestHandle> put(String key, ResponseHandle responseHandle, ByteBufAllocator alloc) {
throw new UnsupportedOperationException();
}
CompletableFuture<Void> put(String key, ByteBuf content) throws ContentTooLargeException;
}

View File

@@ -1,8 +0,0 @@
package net.woggioni.rbcs.api;
import net.woggioni.rbcs.api.event.RequestStreamingEvent;
@FunctionalInterface
public interface RequestHandle {
void handleEvent(RequestStreamingEvent evt);
}

View File

@@ -1,8 +0,0 @@
package net.woggioni.rbcs.api;
import net.woggioni.rbcs.api.event.ResponseStreamingEvent;
@FunctionalInterface
public interface ResponseHandle {
void handleEvent(ResponseStreamingEvent evt);
}

View File

@@ -1,26 +0,0 @@
package net.woggioni.rbcs.api.event;
import io.netty.buffer.ByteBuf;
import lombok.Getter;
import lombok.RequiredArgsConstructor;
public sealed interface RequestStreamingEvent {
@Getter
@RequiredArgsConstructor
non-sealed class ChunkReceived implements RequestStreamingEvent {
private final ByteBuf chunk;
}
final class LastChunkReceived extends ChunkReceived {
public LastChunkReceived(ByteBuf chunk) {
super(chunk);
}
}
@Getter
@RequiredArgsConstructor
final class ExceptionCaught implements RequestStreamingEvent {
private final Throwable exception;
}
}

View File

@@ -1,42 +0,0 @@
package net.woggioni.rbcs.api.event;
import io.netty.buffer.ByteBuf;
import lombok.Getter;
import lombok.RequiredArgsConstructor;
import java.nio.channels.FileChannel;
public sealed interface ResponseStreamingEvent {
final class ResponseReceived implements ResponseStreamingEvent {
}
@Getter
@RequiredArgsConstructor
non-sealed class ChunkReceived implements ResponseStreamingEvent {
private final ByteBuf chunk;
}
@Getter
@RequiredArgsConstructor
non-sealed class FileReceived implements ResponseStreamingEvent {
private final FileChannel file;
}
final class LastChunkReceived extends ChunkReceived {
public LastChunkReceived(ByteBuf chunk) {
super(chunk);
}
}
@Getter
@RequiredArgsConstructor
final class ExceptionCaught implements ResponseStreamingEvent {
private final Throwable exception;
}
final class NotFound implements ResponseStreamingEvent { }
NotFound NOT_FOUND = new NotFound();
ResponseReceived RESPONSE_RECEIVED = new ResponseReceived();
}

View File

@@ -44,6 +44,7 @@ envelopeJar {
dependencies {
implementation catalog.jwo
implementation catalog.slf4j.api
implementation catalog.netty.codec.http
implementation catalog.picocli
implementation project(':rbcs-client')

View File

@@ -6,8 +6,6 @@ import net.woggioni.rbcs.common.contextLogger
import net.woggioni.rbcs.common.error
import net.woggioni.rbcs.common.info
import net.woggioni.jwo.JWO
import net.woggioni.jwo.LongMath
import net.woggioni.rbcs.common.debug
import picocli.CommandLine
import java.security.SecureRandom
import java.time.Duration
@@ -48,7 +46,6 @@ class BenchmarkCommand : RbcsCommand() {
clientCommand.configuration.profiles[profileName]
?: throw IllegalArgumentException("Profile $profileName does not exist in configuration")
}
val progressThreshold = LongMath.ceilDiv(numberOfEntries.toLong(), 20)
RemoteBuildCacheClient(profile).use { client ->
val entryGenerator = sequence {
@@ -82,12 +79,7 @@ class BenchmarkCommand : RbcsCommand() {
completionQueue.put(result)
}
semaphore.release()
val completed = completionCounter.incrementAndGet()
if(completed.mod(progressThreshold) == 0L) {
log.debug {
"Inserted $completed / $numberOfEntries"
}
}
completionCounter.incrementAndGet()
}
} else {
Thread.sleep(0)
@@ -129,12 +121,7 @@ class BenchmarkCommand : RbcsCommand() {
}
}
future.whenComplete { _, _ ->
val completed = completionCounter.incrementAndGet()
if(completed.mod(progressThreshold) == 0L) {
log.debug {
"Retrieved $completed / ${entries.size}"
}
}
completionCounter.incrementAndGet()
semaphore.release()
}
} else {

View File

@@ -6,11 +6,9 @@ plugins {
dependencies {
implementation project(':rbcs-api')
implementation project(':rbcs-common')
implementation catalog.picocli
implementation catalog.slf4j.api
implementation catalog.netty.buffer
implementation catalog.netty.handler
implementation catalog.netty.transport
implementation catalog.netty.common
implementation catalog.netty.codec.http
testRuntimeOnly catalog.logback.classic

View File

@@ -45,9 +45,9 @@ import java.time.Duration
import java.util.Base64
import java.util.concurrent.CompletableFuture
import java.util.concurrent.atomic.AtomicInteger
import kotlin.random.Random
import io.netty.util.concurrent.Future as NettyFuture
class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoCloseable {
private val group: NioEventLoopGroup
private var sslContext: SslContext
@@ -206,7 +206,6 @@ class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoC
retryPolicy.initialDelayMillis.toDouble(),
retryPolicy.exp,
outcomeHandler,
Random.Default,
operation
)
} else {

View File

@@ -3,8 +3,6 @@ package net.woggioni.rbcs.client
import io.netty.util.concurrent.EventExecutorGroup
import java.util.concurrent.CompletableFuture
import java.util.concurrent.TimeUnit
import kotlin.math.pow
import kotlin.random.Random
sealed class OperationOutcome<T> {
class Success<T>(val result: T) : OperationOutcome<T>()
@@ -26,10 +24,8 @@ fun <T> executeWithRetry(
initialDelay: Double,
exp: Double,
outcomeHandler: OutcomeHandler<T>,
randomizer : Random?,
cb: () -> CompletableFuture<T>
): CompletableFuture<T> {
val finalResult = cb()
var future = finalResult
var shortCircuit = false
@@ -50,7 +46,7 @@ fun <T> executeWithRetry(
is OutcomeHandlerResult.Retry -> {
val res = CompletableFuture<T>()
val delay = run {
val scheduledDelay = (initialDelay * exp.pow(i.toDouble()) * (1.0 + (randomizer?.nextDouble(-0.5, 0.5) ?: 0.0))).toLong()
val scheduledDelay = (initialDelay * Math.pow(exp, i.toDouble())).toLong()
outcomeHandlerResult.suggestedDelayMillis?.coerceAtMost(scheduledDelay) ?: scheduledDelay
}
eventExecutorGroup.schedule({

View File

@@ -89,7 +89,7 @@ class RetryTest {
val random = Random(testArgs.seed)
val future =
executeWithRetry(executor, testArgs.maxAttempt, testArgs.initialDelay, testArgs.exp, outcomeHandler, null) {
executeWithRetry(executor, testArgs.maxAttempt, testArgs.initialDelay, testArgs.exp, outcomeHandler) {
val now = System.nanoTime()
val result = CompletableFuture<Int>()
executor.submit {
@@ -129,7 +129,7 @@ class RetryTest {
previousAttempt.first + testArgs.initialDelay * Math.pow(testArgs.exp, index.toDouble()) * 1e6
val actualTimestamp = timestamp
val err = Math.abs(expectedTimestamp - actualTimestamp) / expectedTimestamp
Assertions.assertTrue(err < 1e-2)
Assertions.assertTrue(err < 1e-3)
}
if (index == attempts.size - 1 && index < testArgs.maxAttempt - 1) {
/*

View File

@@ -1,15 +0,0 @@
package net.woggioni.rbcs.common
import io.netty.buffer.ByteBuf
import io.netty.buffer.ByteBufAllocator
import io.netty.buffer.CompositeByteBuf
fun extractChunk(buf: CompositeByteBuf, alloc: ByteBufAllocator): ByteBuf {
val chunk = alloc.compositeBuffer()
for (component in buf.decompose(0, buf.readableBytes())) {
chunk.addComponent(true, component.retain())
}
buf.removeComponents(0, buf.numComponents())
buf.clear()
return chunk
}

View File

@@ -12,24 +12,6 @@ object RBCS {
const val RBCS_PREFIX: String = "rbcs"
const val XML_SCHEMA_NAMESPACE_URI = "http://www.w3.org/2001/XMLSchema-instance"
fun ByteArray.toInt(index : Int = 0) : Long {
if(index + 4 > size) throw IllegalArgumentException("Not enough bytes to decode a 32 bits integer")
var value : Long = 0
for (b in index until index + 4) {
value = (value shl 8) + (get(b).toInt() and 0xFF)
}
return value
}
fun ByteArray.toLong(index : Int = 0) : Long {
if(index + 8 > size) throw IllegalArgumentException("Not enough bytes to decode a 64 bits long integer")
var value : Long = 0
for (b in index until index + 8) {
value = (value shl 8) + (get(b).toInt() and 0xFF)
}
return value
}
fun digest(
data: ByteArray,
md: MessageDigest = MessageDigest.getInstance("MD5")

View File

@@ -34,7 +34,6 @@ dependencies {
implementation catalog.jwo
implementation catalog.slf4j.api
implementation catalog.netty.common
implementation catalog.netty.handler
implementation catalog.netty.codec.memcache
bundle catalog.netty.codec.memcache

View File

@@ -11,7 +11,6 @@ module net.woggioni.rbcs.server.memcache {
requires io.netty.codec.memcache;
requires io.netty.common;
requires io.netty.buffer;
requires io.netty.handler;
requires org.slf4j;
provides CacheProvider with net.woggioni.rbcs.server.memcache.MemcacheCacheProvider;

View File

@@ -1,232 +1,20 @@
package net.woggioni.rbcs.server.memcache
import io.netty.buffer.ByteBufAllocator
import io.netty.buffer.Unpooled
import io.netty.handler.codec.memcache.binary.BinaryMemcacheOpcodes
import io.netty.handler.codec.memcache.binary.BinaryMemcacheResponseStatus
import io.netty.handler.codec.memcache.binary.DefaultBinaryMemcacheRequest
import io.netty.buffer.ByteBuf
import net.woggioni.rbcs.api.Cache
import net.woggioni.rbcs.api.RequestHandle
import net.woggioni.rbcs.api.ResponseHandle
import net.woggioni.rbcs.api.event.RequestStreamingEvent
import net.woggioni.rbcs.api.event.ResponseStreamingEvent
import net.woggioni.rbcs.api.exception.ContentTooLargeException
import net.woggioni.rbcs.common.ByteBufOutputStream
import net.woggioni.rbcs.common.RBCS.digest
import net.woggioni.rbcs.common.contextLogger
import net.woggioni.rbcs.common.debug
import net.woggioni.rbcs.common.extractChunk
import net.woggioni.rbcs.server.memcache.client.MemcacheClient
import net.woggioni.rbcs.server.memcache.client.MemcacheResponseHandle
import net.woggioni.rbcs.server.memcache.client.StreamingRequestEvent
import net.woggioni.rbcs.server.memcache.client.StreamingResponseEvent
import java.security.MessageDigest
import java.time.Duration
import java.time.Instant
import java.nio.channels.ReadableByteChannel
import java.util.concurrent.CompletableFuture
import java.util.zip.Deflater
import java.util.zip.DeflaterOutputStream
import java.util.zip.Inflater
import java.util.zip.InflaterOutputStream
class MemcacheCache(private val cfg: MemcacheCacheConfiguration) : Cache {
companion object {
@JvmStatic
private val log = contextLogger()
}
class MemcacheCache(private val cfg : MemcacheCacheConfiguration) : Cache {
private val memcacheClient = MemcacheClient(cfg)
override fun get(key: String, responseHandle: ResponseHandle, alloc: ByteBufAllocator) {
val compressionMode = cfg.compressionMode
val buf = alloc.compositeBuffer()
val stream = ByteBufOutputStream(buf).let { outputStream ->
if (compressionMode != null) {
when (compressionMode) {
MemcacheCacheConfiguration.CompressionMode.DEFLATE -> {
InflaterOutputStream(
outputStream,
Inflater()
)
}
}
} else {
outputStream
}
}
val memcacheResponseHandle = object : MemcacheResponseHandle {
override fun handleEvent(evt: StreamingResponseEvent) {
when (evt) {
is StreamingResponseEvent.ResponseReceived -> {
if (evt.response.status() == BinaryMemcacheResponseStatus.SUCCESS) {
responseHandle.handleEvent(ResponseStreamingEvent.RESPONSE_RECEIVED)
} else if (evt.response.status() == BinaryMemcacheResponseStatus.KEY_ENOENT) {
responseHandle.handleEvent(ResponseStreamingEvent.NOT_FOUND)
} else {
responseHandle.handleEvent(ResponseStreamingEvent.ExceptionCaught(MemcacheException(evt.response.status())))
}
}
is StreamingResponseEvent.LastContentReceived -> {
evt.content.content().let { content ->
content.readBytes(stream, content.readableBytes())
}
buf.retain()
stream.close()
val chunk = extractChunk(buf, alloc)
buf.release()
responseHandle.handleEvent(
ResponseStreamingEvent.LastChunkReceived(
chunk
)
)
}
is StreamingResponseEvent.ContentReceived -> {
evt.content.content().let { content ->
content.readBytes(stream, content.readableBytes())
}
if (buf.readableBytes() >= cfg.chunkSize) {
val chunk = extractChunk(buf, alloc)
responseHandle.handleEvent(ResponseStreamingEvent.ChunkReceived(chunk))
}
}
is StreamingResponseEvent.ExceptionCaught -> {
responseHandle.handleEvent(ResponseStreamingEvent.ExceptionCaught(evt.exception))
}
}
}
}
memcacheClient.sendRequest(Unpooled.wrappedBuffer(key.toByteArray()), memcacheResponseHandle)
.thenApply { memcacheRequestHandle ->
val request = (cfg.digestAlgorithm
?.let(MessageDigest::getInstance)
?.let { md ->
digest(key.toByteArray(), md)
} ?: key.toByteArray(Charsets.UTF_8)
).let { digest ->
DefaultBinaryMemcacheRequest(Unpooled.wrappedBuffer(digest)).apply {
setOpcode(BinaryMemcacheOpcodes.GET)
}
}
memcacheRequestHandle.handleEvent(StreamingRequestEvent.SendRequest(request))
}.exceptionally { ex ->
responseHandle.handleEvent(ResponseStreamingEvent.ExceptionCaught(ex))
}
override fun get(key: String): CompletableFuture<ReadableByteChannel?> {
return memcacheClient.get(key)
}
private fun encodeExpiry(expiry: Duration): Int {
val expirySeconds = expiry.toSeconds()
return expirySeconds.toInt().takeIf { it.toLong() == expirySeconds }
?: Instant.ofEpochSecond(expirySeconds).epochSecond.toInt()
}
override fun put(
key: String,
responseHandle: ResponseHandle,
alloc: ByteBufAllocator
): CompletableFuture<RequestHandle> {
val memcacheResponseHandle = object : MemcacheResponseHandle {
override fun handleEvent(evt: StreamingResponseEvent) {
when (evt) {
is StreamingResponseEvent.ResponseReceived -> {
when (evt.response.status()) {
BinaryMemcacheResponseStatus.SUCCESS -> {
responseHandle.handleEvent(ResponseStreamingEvent.RESPONSE_RECEIVED)
}
BinaryMemcacheResponseStatus.KEY_ENOENT -> {
responseHandle.handleEvent(ResponseStreamingEvent.NOT_FOUND)
}
BinaryMemcacheResponseStatus.E2BIG -> {
responseHandle.handleEvent(
ResponseStreamingEvent.ExceptionCaught(
ContentTooLargeException("Request payload is too big", null)
)
)
}
else -> {
responseHandle.handleEvent(ResponseStreamingEvent.ExceptionCaught(MemcacheException(evt.response.status())))
}
}
}
is StreamingResponseEvent.LastContentReceived -> {
responseHandle.handleEvent(
ResponseStreamingEvent.LastChunkReceived(
evt.content.content().retain()
)
)
}
is StreamingResponseEvent.ContentReceived -> {
responseHandle.handleEvent(ResponseStreamingEvent.ChunkReceived(evt.content.content().retain()))
}
is StreamingResponseEvent.ExceptionCaught -> {
responseHandle.handleEvent(ResponseStreamingEvent.ExceptionCaught(evt.exception))
}
}
}
}
val result: CompletableFuture<RequestHandle> =
memcacheClient.sendRequest(Unpooled.wrappedBuffer(key.toByteArray()), memcacheResponseHandle)
.thenApply { memcacheRequestHandle ->
val request = (cfg.digestAlgorithm
?.let(MessageDigest::getInstance)
?.let { md ->
digest(key.toByteArray(), md)
} ?: key.toByteArray(Charsets.UTF_8)).let { digest ->
val extras = Unpooled.buffer(8, 8)
extras.writeInt(0)
extras.writeInt(encodeExpiry(cfg.maxAge))
DefaultBinaryMemcacheRequest(Unpooled.wrappedBuffer(digest), extras).apply {
setOpcode(BinaryMemcacheOpcodes.SET)
}
}
// memcacheRequestHandle.handleEvent(StreamingRequestEvent.SendRequest(request))
val compressionMode = cfg.compressionMode
val buf = alloc.heapBuffer()
val stream = ByteBufOutputStream(buf).let { outputStream ->
if (compressionMode != null) {
when (compressionMode) {
MemcacheCacheConfiguration.CompressionMode.DEFLATE -> {
DeflaterOutputStream(
outputStream,
Deflater(Deflater.DEFAULT_COMPRESSION, false)
)
}
}
} else {
outputStream
}
}
RequestHandle { evt ->
when (evt) {
is RequestStreamingEvent.LastChunkReceived -> {
evt.chunk.readBytes(stream, evt.chunk.readableBytes())
buf.retain()
stream.close()
request.setTotalBodyLength(buf.readableBytes() + request.keyLength() + request.extrasLength())
memcacheRequestHandle.handleEvent(StreamingRequestEvent.SendRequest(request))
memcacheRequestHandle.handleEvent(StreamingRequestEvent.SendLastChunk(buf))
}
is RequestStreamingEvent.ChunkReceived -> {
evt.chunk.readBytes(stream, evt.chunk.readableBytes())
}
is RequestStreamingEvent.ExceptionCaught -> {
stream.close()
}
}
}
}
return result
override fun put(key: String, content: ByteBuf): CompletableFuture<Void> {
return memcacheClient.put(key, content, cfg.maxAge)
}
override fun close() {

View File

@@ -10,10 +10,14 @@ data class MemcacheCacheConfiguration(
val maxSize: Int = 0x100000,
val digestAlgorithm: String? = null,
val compressionMode: CompressionMode? = null,
val chunkSize : Int
) : Configuration.Cache {
enum class CompressionMode {
/**
* Gzip mode
*/
GZIP,
/**
* Deflate mode
*/

View File

@@ -29,14 +29,12 @@ class MemcacheCacheProvider : CacheProvider<MemcacheCacheConfiguration> {
?.let(Duration::parse)
?: Duration.ofDays(1)
val maxSize = el.renderAttribute("max-size")
?.let(Integer::decode)
?.let(String::toInt)
?: 0x100000
val chunkSize = el.renderAttribute("chunk-size")
?.let(Integer::decode)
?: 0x4000
val compressionMode = el.renderAttribute("compression-mode")
?.let {
when (it) {
"gzip" -> MemcacheCacheConfiguration.CompressionMode.GZIP
"deflate" -> MemcacheCacheConfiguration.CompressionMode.DEFLATE
else -> MemcacheCacheConfiguration.CompressionMode.DEFLATE
}
@@ -65,7 +63,6 @@ class MemcacheCacheProvider : CacheProvider<MemcacheCacheConfiguration> {
maxSize,
digestAlgorithm,
compressionMode,
chunkSize
)
}
@@ -73,6 +70,7 @@ class MemcacheCacheProvider : CacheProvider<MemcacheCacheConfiguration> {
val result = doc.createElement("cache")
Xml.of(doc, result) {
attr("xmlns:${xmlNamespacePrefix}", xmlNamespace, namespaceURI = "http://www.w3.org/2000/xmlns/")
attr("xs:type", "${xmlNamespacePrefix}:$xmlType", RBCS.XML_SCHEMA_NAMESPACE_URI)
for (server in servers) {
node("server") {
@@ -86,13 +84,13 @@ class MemcacheCacheProvider : CacheProvider<MemcacheCacheConfiguration> {
}
attr("max-age", maxAge.toString())
attr("max-size", maxSize.toString())
attr("chunk-size", chunkSize.toString())
digestAlgorithm?.let { digestAlgorithm ->
attr("digest", digestAlgorithm)
}
compressionMode?.let { compressionMode ->
attr(
"compression-mode", when (compressionMode) {
MemcacheCacheConfiguration.CompressionMode.GZIP -> "gzip"
MemcacheCacheConfiguration.CompressionMode.DEFLATE -> "deflate"
}
)

View File

@@ -1,30 +0,0 @@
package net.woggioni.rbcs.server.memcache.client
import io.netty.buffer.ByteBuf
import io.netty.handler.codec.memcache.LastMemcacheContent
import io.netty.handler.codec.memcache.MemcacheContent
import io.netty.handler.codec.memcache.binary.BinaryMemcacheRequest
import io.netty.handler.codec.memcache.binary.BinaryMemcacheResponse
sealed interface StreamingRequestEvent {
class SendRequest(val request : BinaryMemcacheRequest) : StreamingRequestEvent
open class SendChunk(val chunk : ByteBuf) : StreamingRequestEvent
class SendLastChunk(chunk : ByteBuf) : SendChunk(chunk)
class ExceptionCaught(val exception : Throwable) : StreamingRequestEvent
}
sealed interface StreamingResponseEvent {
class ResponseReceived(val response : BinaryMemcacheResponse) : StreamingResponseEvent
open class ContentReceived(val content : MemcacheContent) : StreamingResponseEvent
class LastContentReceived(val lastContent : LastMemcacheContent) : ContentReceived(lastContent)
class ExceptionCaught(val exception : Throwable) : StreamingResponseEvent
}
interface MemcacheRequestHandle {
fun handleEvent(evt : StreamingRequestEvent)
}
interface MemcacheResponseHandle {
fun handleEvent(evt : StreamingResponseEvent)
}

View File

@@ -3,6 +3,7 @@ package net.woggioni.rbcs.server.memcache.client
import io.netty.bootstrap.Bootstrap
import io.netty.buffer.ByteBuf
import io.netty.buffer.Unpooled
import io.netty.channel.Channel
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.ChannelOption
@@ -13,23 +14,36 @@ import io.netty.channel.pool.AbstractChannelPoolHandler
import io.netty.channel.pool.ChannelPool
import io.netty.channel.pool.FixedChannelPool
import io.netty.channel.socket.nio.NioSocketChannel
import io.netty.handler.codec.memcache.DefaultLastMemcacheContent
import io.netty.handler.codec.memcache.DefaultMemcacheContent
import io.netty.handler.codec.memcache.LastMemcacheContent
import io.netty.handler.codec.memcache.MemcacheContent
import io.netty.handler.codec.memcache.MemcacheObject
import io.netty.handler.codec.DecoderException
import io.netty.handler.codec.memcache.binary.BinaryMemcacheClientCodec
import io.netty.handler.codec.memcache.binary.BinaryMemcacheResponse
import io.netty.handler.logging.LoggingHandler
import io.netty.handler.codec.memcache.binary.BinaryMemcacheObjectAggregator
import io.netty.handler.codec.memcache.binary.BinaryMemcacheOpcodes
import io.netty.handler.codec.memcache.binary.BinaryMemcacheResponseStatus
import io.netty.handler.codec.memcache.binary.DefaultFullBinaryMemcacheRequest
import io.netty.handler.codec.memcache.binary.FullBinaryMemcacheRequest
import io.netty.handler.codec.memcache.binary.FullBinaryMemcacheResponse
import io.netty.util.concurrent.GenericFutureListener
import net.woggioni.rbcs.common.ByteBufInputStream
import net.woggioni.rbcs.common.ByteBufOutputStream
import net.woggioni.rbcs.common.RBCS.digest
import net.woggioni.rbcs.common.HostAndPort
import net.woggioni.rbcs.common.contextLogger
import net.woggioni.rbcs.common.debug
import net.woggioni.rbcs.server.memcache.MemcacheCacheConfiguration
import net.woggioni.rbcs.server.memcache.MemcacheException
import net.woggioni.jwo.JWO
import java.net.InetSocketAddress
import java.nio.channels.Channels
import java.nio.channels.ReadableByteChannel
import java.security.MessageDigest
import java.time.Duration
import java.time.Instant
import java.util.concurrent.CompletableFuture
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.AtomicLong
import java.util.zip.Deflater
import java.util.zip.DeflaterOutputStream
import java.util.zip.GZIPInputStream
import java.util.zip.GZIPOutputStream
import java.util.zip.InflaterInputStream
import io.netty.util.concurrent.Future as NettyFuture
@@ -47,8 +61,6 @@ class MemcacheClient(private val cfg: MemcacheCacheConfiguration) : AutoCloseabl
group = NioEventLoopGroup()
}
private val counter = AtomicLong(0)
private fun newConnectionPool(server: MemcacheCacheConfiguration.Server): FixedChannelPool {
val bootstrap = Bootstrap().apply {
group(group)
@@ -64,15 +76,18 @@ class MemcacheClient(private val cfg: MemcacheCacheConfiguration) : AutoCloseabl
override fun channelCreated(ch: Channel) {
val pipeline: ChannelPipeline = ch.pipeline()
pipeline.addLast(BinaryMemcacheClientCodec())
pipeline.addLast(LoggingHandler())
pipeline.addLast(BinaryMemcacheObjectAggregator(cfg.maxSize))
}
}
return FixedChannelPool(bootstrap, channelPoolHandler, server.maxConnections)
}
fun sendRequest(key: ByteBuf, responseHandle: MemcacheResponseHandle): CompletableFuture<MemcacheRequestHandle> {
private fun sendRequest(request: FullBinaryMemcacheRequest): CompletableFuture<FullBinaryMemcacheResponse> {
val server = cfg.servers.let { servers ->
if (servers.size > 1) {
val key = request.key().duplicate()
var checksum = 0
while (key.readableBytes() > 4) {
val byte = key.readInt()
@@ -88,7 +103,7 @@ class MemcacheClient(private val cfg: MemcacheCacheConfiguration) : AutoCloseabl
}
}
val response = CompletableFuture<MemcacheRequestHandle>()
val response = CompletableFuture<FullBinaryMemcacheResponse>()
// Custom handler for processing responses
val pool = connectionPool.computeIfAbsent(server.endpoint) {
newConnectionPool(server)
@@ -98,73 +113,31 @@ class MemcacheClient(private val cfg: MemcacheCacheConfiguration) : AutoCloseabl
if (channelFuture.isSuccess) {
val channel = channelFuture.now
val pipeline = channel.pipeline()
val handler = object : SimpleChannelInboundHandler<MemcacheObject>() {
override fun channelRead0(
ctx: ChannelHandlerContext,
msg: MemcacheObject
) {
when (msg) {
is BinaryMemcacheResponse -> responseHandle.handleEvent(
StreamingResponseEvent.ResponseReceived(
msg
)
)
is LastMemcacheContent -> {
responseHandle.handleEvent(
StreamingResponseEvent.LastContentReceived(
msg
)
)
pipeline.removeLast()
pool.release(channel)
}
is MemcacheContent -> responseHandle.handleEvent(
StreamingResponseEvent.ContentReceived(
msg
)
)
}
}
override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
responseHandle.handleEvent(StreamingResponseEvent.ExceptionCaught(cause))
ctx.close()
pipeline.removeLast()
pool.release(channel)
}
}
channel.pipeline()
.addLast("client-handler", handler)
response.complete(object : MemcacheRequestHandle {
override fun handleEvent(evt: StreamingRequestEvent) {
when (evt) {
is StreamingRequestEvent.SendRequest -> {
channel.writeAndFlush(evt.request)
}
is StreamingRequestEvent.SendLastChunk -> {
channel.writeAndFlush(DefaultLastMemcacheContent(evt.chunk))
val value = counter.incrementAndGet()
log.debug {
"Finished request counter: $value"
}
}
is StreamingRequestEvent.SendChunk -> {
channel.writeAndFlush(DefaultMemcacheContent(evt.chunk))
}
is StreamingRequestEvent.ExceptionCaught -> {
responseHandle.handleEvent(StreamingResponseEvent.ExceptionCaught(evt.exception))
channel.close()
pipeline.removeLast()
pool.release(channel)
}
.addLast("client-handler", object : SimpleChannelInboundHandler<FullBinaryMemcacheResponse>() {
override fun channelRead0(
ctx: ChannelHandlerContext,
msg: FullBinaryMemcacheResponse
) {
pipeline.removeLast()
pool.release(channel)
msg.touch("The method's caller must remember to release this")
response.complete(msg.retain())
}
}
})
override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
val ex = when (cause) {
is DecoderException -> cause.cause!!
else -> cause
}
ctx.close()
pipeline.removeLast()
pool.release(channel)
response.completeExceptionally(ex)
}
})
request.touch()
channel.writeAndFlush(request)
} else {
response.completeExceptionally(channelFuture.cause())
}
@@ -173,6 +146,107 @@ class MemcacheClient(private val cfg: MemcacheCacheConfiguration) : AutoCloseabl
return response
}
private fun encodeExpiry(expiry: Duration): Int {
val expirySeconds = expiry.toSeconds()
return expirySeconds.toInt().takeIf { it.toLong() == expirySeconds }
?: Instant.ofEpochSecond(expirySeconds).epochSecond.toInt()
}
fun get(key: String): CompletableFuture<ReadableByteChannel?> {
val request = (cfg.digestAlgorithm
?.let(MessageDigest::getInstance)
?.let { md ->
digest(key.toByteArray(), md)
} ?: key.toByteArray(Charsets.UTF_8)).let { digest ->
DefaultFullBinaryMemcacheRequest(Unpooled.wrappedBuffer(digest), null).apply {
setOpcode(BinaryMemcacheOpcodes.GET)
}
}
return sendRequest(request).thenApply { response ->
try {
when (val status = response.status()) {
BinaryMemcacheResponseStatus.SUCCESS -> {
val compressionMode = cfg.compressionMode
val content = response.content().retain()
content.touch()
if (compressionMode != null) {
when (compressionMode) {
MemcacheCacheConfiguration.CompressionMode.GZIP -> {
GZIPInputStream(ByteBufInputStream(content))
}
MemcacheCacheConfiguration.CompressionMode.DEFLATE -> {
InflaterInputStream(ByteBufInputStream(content))
}
}
} else {
ByteBufInputStream(content)
}.let(Channels::newChannel)
}
BinaryMemcacheResponseStatus.KEY_ENOENT -> {
null
}
else -> throw MemcacheException(status)
}
} finally {
response.release()
}
}
}
fun put(key: String, content: ByteBuf, expiry: Duration, cas: Long? = null): CompletableFuture<Void> {
val request = (cfg.digestAlgorithm
?.let(MessageDigest::getInstance)
?.let { md ->
digest(key.toByteArray(), md)
} ?: key.toByteArray(Charsets.UTF_8)).let { digest ->
val extras = Unpooled.buffer(8, 8)
extras.writeInt(0)
extras.writeInt(encodeExpiry(expiry))
val compressionMode = cfg.compressionMode
content.retain()
val payload = if (compressionMode != null) {
val inputStream = ByteBufInputStream(content)
val buf = content.alloc().buffer()
buf.retain()
val outputStream = when (compressionMode) {
MemcacheCacheConfiguration.CompressionMode.GZIP -> {
GZIPOutputStream(ByteBufOutputStream(buf))
}
MemcacheCacheConfiguration.CompressionMode.DEFLATE -> {
DeflaterOutputStream(ByteBufOutputStream(buf), Deflater(Deflater.DEFAULT_COMPRESSION, false))
}
}
inputStream.use { i ->
outputStream.use { o ->
JWO.copy(i, o)
}
}
buf
} else {
content
}
DefaultFullBinaryMemcacheRequest(Unpooled.wrappedBuffer(digest), extras, payload).apply {
setOpcode(BinaryMemcacheOpcodes.SET)
cas?.let(this::setCas)
}
}
return sendRequest(request).thenApply { response ->
try {
when (val status = response.status()) {
BinaryMemcacheResponseStatus.SUCCESS -> null
else -> throw MemcacheException(status)
}
} finally {
response.release()
}
}
}
fun shutDown(): NettyFuture<*> {
return group.shutdownGracefully()
}

View File

@@ -20,8 +20,7 @@
<xs:element name="server" type="rbcs-memcache:memcacheServerType"/>
</xs:sequence>
<xs:attribute name="max-age" type="xs:duration" default="P1D"/>
<xs:attribute name="max-size" type="rbcs:byteSize" default="1048576"/>
<xs:attribute name="chunk-size" type="rbcs:byteSize" default="0x4000"/>
<xs:attribute name="max-size" type="xs:unsignedInt" default="1048576"/>
<xs:attribute name="digest" type="xs:token" />
<xs:attribute name="compression-mode" type="rbcs-memcache:compressionType"/>
</xs:extension>
@@ -31,6 +30,7 @@
<xs:simpleType name="compressionType">
<xs:restriction base="xs:token">
<xs:enumeration value="deflate"/>
<xs:enumeration value="gzip"/>
</xs:restriction>
</xs:simpleType>

View File

@@ -9,9 +9,6 @@ dependencies {
implementation catalog.jwo
implementation catalog.slf4j.api
implementation catalog.netty.codec.http
implementation catalog.netty.handler
implementation catalog.netty.buffer
implementation catalog.netty.transport
api project(':rbcs-common')
api project(':rbcs-api')
@@ -39,4 +36,3 @@ publishing {
}

View File

@@ -16,6 +16,7 @@ import io.netty.handler.codec.compression.CompressionOptions
import io.netty.handler.codec.http.DefaultHttpContent
import io.netty.handler.codec.http.HttpContentCompressor
import io.netty.handler.codec.http.HttpHeaderNames
import io.netty.handler.codec.http.HttpObjectAggregator
import io.netty.handler.codec.http.HttpRequest
import io.netty.handler.codec.http.HttpServerCodec
import io.netty.handler.ssl.ClientAuth
@@ -29,13 +30,11 @@ import io.netty.handler.timeout.IdleStateHandler
import io.netty.util.AttributeKey
import io.netty.util.concurrent.DefaultEventExecutorGroup
import io.netty.util.concurrent.EventExecutorGroup
import net.woggioni.jwo.JWO
import net.woggioni.jwo.Tuple2
import net.woggioni.rbcs.api.Configuration
import net.woggioni.rbcs.api.exception.ConfigurationException
import net.woggioni.rbcs.common.RBCS.toUrl
import net.woggioni.rbcs.common.PasswordSecurity.decodePasswordHash
import net.woggioni.rbcs.common.PasswordSecurity.hashPassword
import net.woggioni.rbcs.common.RBCS.toUrl
import net.woggioni.rbcs.common.Xml
import net.woggioni.rbcs.common.contextLogger
import net.woggioni.rbcs.common.debug
@@ -49,6 +48,8 @@ import net.woggioni.rbcs.server.configuration.Serializer
import net.woggioni.rbcs.server.exception.ExceptionHandler
import net.woggioni.rbcs.server.handler.ServerHandler
import net.woggioni.rbcs.server.throttling.ThrottlingHandler
import net.woggioni.jwo.JWO
import net.woggioni.jwo.Tuple2
import java.io.OutputStream
import java.net.InetSocketAddress
import java.nio.file.Files
@@ -58,7 +59,6 @@ import java.security.PrivateKey
import java.security.cert.X509Certificate
import java.util.Arrays
import java.util.Base64
import java.util.concurrent.Future
import java.util.concurrent.TimeUnit
import java.util.regex.Matcher
import java.util.regex.Pattern
@@ -128,12 +128,11 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
val clientCertificate = peerCertificates.first() as X509Certificate
val user = userExtractor?.extract(clientCertificate)
val group = groupExtractor?.extract(clientCertificate)
val allGroups =
((user?.groups ?: emptySet()).asSequence() + sequenceOf(group).filterNotNull()).toSet()
val allGroups = ((user?.groups ?: emptySet()).asSequence() + sequenceOf(group).filterNotNull()).toSet()
AuthenticationResult(user, allGroups)
} ?: anonymousUserGroups?.let { AuthenticationResult(null, it) }
} ?: anonymousUserGroups?.let{ AuthenticationResult(null, it) }
} catch (es: SSLPeerUnverifiedException) {
anonymousUserGroups?.let { AuthenticationResult(null, it) }
anonymousUserGroups?.let{ AuthenticationResult(null, it) }
}
}
}
@@ -192,7 +191,7 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
private class ServerInitializer(
private val cfg: Configuration,
private val eventExecutorGroup: EventExecutorGroup
) : ChannelInitializer<Channel>(), AutoCloseable {
) : ChannelInitializer<Channel>() {
companion object {
private fun createSslCtx(tls: Configuration.Tls): SslContext {
@@ -214,7 +213,7 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
trustManager(
ClientCertificateValidator.getTrustManager(ts, trustStore.isCheckCertificateStatus)
)
if (trustStore.isRequireClientCertificate) ClientAuth.REQUIRE
if(trustStore.isRequireClientCertificate) ClientAuth.REQUIRE
else ClientAuth.OPTIONAL
} ?: ClientAuth.NONE
clientAuth(clientAuth)
@@ -246,9 +245,14 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
private val log = contextLogger()
private val cache = cfg.cache.materialize()
private val serverHandler = let {
val cacheImplementation = cfg.cache.materialize()
val prefix = Path.of("/").resolve(Path.of(cfg.serverPath ?: "/"))
ServerHandler(cacheImplementation, prefix)
}
private val exceptionHandler = ExceptionHandler()
private val throttlingHandler = ThrottlingHandler(cfg)
private val authenticator = when (val auth = cfg.authentication) {
is Configuration.BasicAuthentication -> NettyHttpBasicAuthenticator(cfg.users, RoleAuthorizer())
@@ -307,7 +311,7 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
cfg.connection.also { conn ->
val readTimeout = conn.readTimeout.toMillis()
val writeTimeout = conn.writeTimeout.toMillis()
if (readTimeout > 0 || writeTimeout > 0) {
if(readTimeout > 0 || writeTimeout > 0) {
pipeline.addLast(
IdleStateHandler(
false,
@@ -321,7 +325,7 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
val readIdleTimeout = conn.readIdleTimeout.toMillis()
val writeIdleTimeout = conn.writeIdleTimeout.toMillis()
val idleTimeout = conn.idleTimeout.toMillis()
if (readIdleTimeout > 0 || writeIdleTimeout > 0 || idleTimeout > 0) {
if(readIdleTimeout > 0 || writeIdleTimeout > 0 || idleTimeout > 0) {
pipeline.addLast(
IdleStateHandler(
true,
@@ -336,19 +340,16 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
pipeline.addLast(object : ChannelInboundHandlerAdapter() {
override fun userEventTriggered(ctx: ChannelHandlerContext, evt: Any) {
if (evt is IdleStateEvent) {
when (evt.state()) {
when(evt.state()) {
IdleState.READER_IDLE -> log.debug {
"Read timeout reached on channel ${ch.id().asShortText()}, closing the connection"
}
IdleState.WRITER_IDLE -> log.debug {
"Write timeout reached on channel ${ch.id().asShortText()}, closing the connection"
}
IdleState.ALL_IDLE -> log.debug {
"Idle timeout reached on channel ${ch.id().asShortText()}, closing the connection"
}
null -> throw IllegalStateException("This should never happen")
}
ctx.close()
@@ -361,53 +362,34 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
pipeline.addLast(HttpServerCodec())
pipeline.addLast(HttpChunkContentCompressor(1024))
pipeline.addLast(ChunkedWriteHandler())
pipeline.addLast(HttpObjectAggregator(cfg.connection.maxRequestSize))
authenticator?.let {
pipeline.addLast(it)
}
pipeline.addLast(ThrottlingHandler(cfg))
val serverHandler = let {
val prefix = Path.of("/").resolve(Path.of(cfg.serverPath ?: "/"))
ServerHandler(cache, prefix)
}
pipeline.addLast(throttlingHandler)
pipeline.addLast(eventExecutorGroup, serverHandler)
pipeline.addLast(exceptionHandler)
}
override fun close() {
cache.close()
}
}
class ServerHandle(
httpChannelFuture: ChannelFuture,
private val executorGroups: Iterable<EventExecutorGroup>,
private val serverInitializer: AutoCloseable
private val executorGroups: Iterable<EventExecutorGroup>
) : AutoCloseable {
private val httpChannel: Channel = httpChannelFuture.channel()
private val closeFuture: ChannelFuture = httpChannel.closeFuture()
private val log = contextLogger()
fun shutdown(): Future<Void> {
fun shutdown(): ChannelFuture {
return httpChannel.close()
}
override fun close() {
try {
closeFuture.sync()
} catch (ex: Throwable) {
log.error(ex.message, ex)
}
try {
serverInitializer.close()
} catch (ex: Throwable) {
log.error(ex.message, ex)
}
executorGroups.forEach {
try {
} finally {
executorGroups.forEach {
it.shutdownGracefully().sync()
} catch (ex: Throwable) {
log.error(ex.message, ex)
}
}
log.info {
@@ -429,12 +411,11 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
}
DefaultEventExecutorGroup(Runtime.getRuntime().availableProcessors(), threadFactory)
}
val serverInitializer = ServerInitializer(cfg, eventExecutorGroup)
val bootstrap = ServerBootstrap().apply {
// Configure the server
group(bossGroup, workerGroup)
channel(serverSocketChannel)
childHandler(serverInitializer)
childHandler(ServerInitializer(cfg, eventExecutorGroup))
option(ChannelOption.SO_BACKLOG, cfg.incomingConnectionsBacklogSize)
childOption(ChannelOption.SO_KEEPALIVE, true)
}
@@ -446,6 +427,6 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
log.info {
"RemoteBuildCacheServer is listening on ${cfg.host}:${cfg.port}"
}
return ServerHandle(httpChannel, setOf(bossGroup, workerGroup, eventExecutorGroup), serverInitializer)
return ServerHandle(httpChannel, setOf(bossGroup, workerGroup, eventExecutorGroup))
}
}

View File

@@ -6,7 +6,6 @@ import io.netty.channel.ChannelHandlerContext
import io.netty.channel.ChannelInboundHandlerAdapter
import io.netty.handler.codec.http.DefaultFullHttpResponse
import io.netty.handler.codec.http.FullHttpResponse
import io.netty.handler.codec.http.HttpContent
import io.netty.handler.codec.http.HttpHeaderNames
import io.netty.handler.codec.http.HttpRequest
import io.netty.handler.codec.http.HttpResponseStatus
@@ -58,8 +57,6 @@ abstract class AbstractNettyHttpAuthenticator(private val authorizer: Authorizer
} else {
authorizationFailure(ctx, msg)
}
} else if(msg is HttpContent) {
ctx.fireChannelRead(msg)
}
}

View File

@@ -1,16 +1,13 @@
package net.woggioni.rbcs.server.cache
import io.netty.buffer.ByteBufAllocator
import net.woggioni.jwo.JWO
import io.netty.buffer.ByteBuf
import net.woggioni.rbcs.api.Cache
import net.woggioni.rbcs.api.RequestHandle
import net.woggioni.rbcs.api.ResponseHandle
import net.woggioni.rbcs.api.event.RequestStreamingEvent
import net.woggioni.rbcs.api.event.ResponseStreamingEvent
import net.woggioni.rbcs.common.ByteBufOutputStream
import net.woggioni.rbcs.common.ByteBufInputStream
import net.woggioni.rbcs.common.RBCS.digestString
import net.woggioni.rbcs.common.contextLogger
import net.woggioni.rbcs.common.extractChunk
import net.woggioni.jwo.JWO
import net.woggioni.jwo.LockFile
import java.nio.channels.Channels
import java.nio.channels.FileChannel
import java.nio.file.Files
import java.nio.file.Path
@@ -21,8 +18,10 @@ import java.security.MessageDigest
import java.time.Duration
import java.time.Instant
import java.util.concurrent.CompletableFuture
import java.util.concurrent.atomic.AtomicReference
import java.util.zip.Deflater
import java.util.zip.DeflaterOutputStream
import java.util.zip.Inflater
import java.util.zip.InflaterInputStream
class FileSystemCache(
@@ -30,8 +29,7 @@ class FileSystemCache(
val maxAge: Duration,
val digestAlgorithm: String?,
val compressionEnabled: Boolean,
val compressionLevel: Int,
val chunkSize: Int
val compressionLevel: Int
) : Cache {
private companion object {
@@ -43,159 +41,90 @@ class FileSystemCache(
Files.createDirectories(root)
}
@Volatile
private var running = true
private var nextGc = AtomicReference(Instant.now().plus(maxAge))
private var nextGc = Instant.now()
override fun get(key: String) = (digestAlgorithm
?.let(MessageDigest::getInstance)
?.let { md ->
digestString(key.toByteArray(), md)
} ?: key).let { digest ->
root.resolve(digest).takeIf(Files::exists)
?.let { file ->
file.takeIf(Files::exists)?.let { file ->
if (compressionEnabled) {
val inflater = Inflater()
Channels.newChannel(
InflaterInputStream(
Channels.newInputStream(
FileChannel.open(
file,
StandardOpenOption.READ
)
), inflater
)
)
} else {
FileChannel.open(file, StandardOpenOption.READ)
}
}
}.also {
gc()
}.let {
CompletableFuture.completedFuture(it)
}
}
override fun get(key: String, responseHandle: ResponseHandle, alloc: ByteBufAllocator) {
override fun put(key: String, content: ByteBuf): CompletableFuture<Void> {
(digestAlgorithm
?.let(MessageDigest::getInstance)
?.let { md ->
digestString(key.toByteArray(), md)
} ?: key).let { digest ->
root.resolve(digest).takeIf(Files::exists)
?.let { file ->
file.takeIf(Files::exists)?.let { file ->
responseHandle.handleEvent(ResponseStreamingEvent.RESPONSE_RECEIVED)
if (compressionEnabled) {
val compositeBuffer = alloc.compositeBuffer()
ByteBufOutputStream(compositeBuffer).use { outputStream ->
InflaterInputStream(Files.newInputStream(file)).use { inputStream ->
val ioBuffer = alloc.buffer(chunkSize)
try {
while (true) {
val read = ioBuffer.writeBytes(inputStream, chunkSize)
val last = read < 0
if (read > 0) {
ioBuffer.readBytes(outputStream, read)
}
if (last) {
compositeBuffer.retain()
outputStream.close()
}
if (compositeBuffer.readableBytes() >= chunkSize || last) {
val chunk = extractChunk(compositeBuffer, alloc)
val evt = if (last) {
ResponseStreamingEvent.LastChunkReceived(chunk)
} else {
ResponseStreamingEvent.ChunkReceived(chunk)
}
responseHandle.handleEvent(evt)
}
if (last) break
}
} finally {
ioBuffer.release()
}
}
}
} else {
responseHandle.handleEvent(
ResponseStreamingEvent.FileReceived(
FileChannel.open(file, StandardOpenOption.READ)
)
)
}
}
} ?: responseHandle.handleEvent(ResponseStreamingEvent.NOT_FOUND)
}
}
override fun put(
key: String,
responseHandle: ResponseHandle,
alloc: ByteBufAllocator
): CompletableFuture<RequestHandle> {
try {
(digestAlgorithm
?.let(MessageDigest::getInstance)
?.let { md ->
digestString(key.toByteArray(), md)
} ?: key).let { digest ->
val file = root.resolve(digest)
val tmpFile = Files.createTempFile(root, null, ".tmp")
val stream = Files.newOutputStream(tmpFile).let {
val file = root.resolve(digest)
val tmpFile = Files.createTempFile(root, null, ".tmp")
try {
Files.newOutputStream(tmpFile).let {
if (compressionEnabled) {
val deflater = Deflater(compressionLevel)
DeflaterOutputStream(it, deflater)
} else {
it
}
}.use {
JWO.copy(ByteBufInputStream(content), it)
}
return CompletableFuture.completedFuture(object : RequestHandle {
override fun handleEvent(evt: RequestStreamingEvent) {
try {
when (evt) {
is RequestStreamingEvent.LastChunkReceived -> {
evt.chunk.readBytes(stream, evt.chunk.readableBytes())
stream.close()
Files.move(tmpFile, file, StandardCopyOption.ATOMIC_MOVE)
responseHandle.handleEvent(ResponseStreamingEvent.RESPONSE_RECEIVED)
}
is RequestStreamingEvent.ChunkReceived -> {
evt.chunk.readBytes(stream, evt.chunk.readableBytes())
}
is RequestStreamingEvent.ExceptionCaught -> {
Files.delete(tmpFile)
stream.close()
}
}
} catch (ex: Throwable) {
responseHandle.handleEvent(ResponseStreamingEvent.ExceptionCaught(ex))
}
}
})
Files.move(tmpFile, file, StandardCopyOption.ATOMIC_MOVE)
} catch (t: Throwable) {
Files.delete(tmpFile)
throw t
}
} catch (ex: Throwable) {
responseHandle.handleEvent(ResponseStreamingEvent.ExceptionCaught(ex))
return CompletableFuture.failedFuture(ex)
}
}
private val garbageCollector = Thread.ofVirtual().name("file-system-cache-gc").start {
while (running) {
}.also {
gc()
}
return CompletableFuture.completedFuture(null)
}
private fun gc() {
val now = Instant.now()
if (nextGc < now) {
val oldestEntry = actualGc(now)
nextGc = (oldestEntry ?: now).plus(maxAge)
val oldValue = nextGc.getAndSet(now.plus(maxAge))
if (oldValue < now) {
actualGc(now)
}
Thread.sleep(minOf(Duration.between(now, nextGc), Duration.ofSeconds(1)))
}
/**
* Returns the creation timestamp of the oldest cache entry (if any)
*/
private fun actualGc(now: Instant): Instant? {
var result: Instant? = null
Files.list(root)
.filter { path ->
JWO.splitExtension(path)
.map { it._2 }
.map { it != ".tmp" }
.orElse(true)
@Synchronized
private fun actualGc(now: Instant) {
Files.list(root).filter {
val creationTimeStamp = Files.readAttributes(it, BasicFileAttributes::class.java)
.creationTime()
.toInstant()
now > creationTimeStamp.plus(maxAge)
}.forEach { file ->
LockFile.acquire(file, false).use {
Files.delete(file)
}
.filter {
val creationTimeStamp = Files.readAttributes(it, BasicFileAttributes::class.java)
.creationTime()
.toInstant()
if (result == null || creationTimeStamp < result) {
result = creationTimeStamp
}
now > creationTimeStamp.plus(maxAge)
}.forEach(Files::delete)
return result
}
}
override fun close() {
running = false
garbageCollector.join()
}
override fun close() {}
}

View File

@@ -12,15 +12,13 @@ data class FileSystemCacheConfiguration(
val digestAlgorithm : String?,
val compressionEnabled: Boolean,
val compressionLevel: Int,
val chunkSize: Int,
) : Configuration.Cache {
override fun materialize() = FileSystemCache(
root ?: Application.builder("rbcs").build().computeCacheDirectory(),
maxAge,
digestAlgorithm,
compressionEnabled,
compressionLevel,
chunkSize,
compressionLevel
)
override fun getNamespaceURI() = RBCS.RBCS_NAMESPACE_URI

View File

@@ -31,17 +31,13 @@ class FileSystemCacheProvider : CacheProvider<FileSystemCacheConfiguration> {
?.let(String::toInt)
?: Deflater.DEFAULT_COMPRESSION
val digestAlgorithm = el.renderAttribute("digest") ?: "MD5"
val chunkSize = el.renderAttribute("chunk-size")
?.let(Integer::decode)
?: 0x4000
return FileSystemCacheConfiguration(
path,
maxAge,
digestAlgorithm,
enableCompression,
compressionLevel,
chunkSize
compressionLevel
)
}
@@ -61,7 +57,6 @@ class FileSystemCacheProvider : CacheProvider<FileSystemCacheConfiguration> {
}?.let {
attr("compression-level", it.toString())
}
attr("chunk-size", chunkSize.toString())
}
result
}

View File

@@ -1,36 +1,31 @@
package net.woggioni.rbcs.server.cache
import io.netty.buffer.ByteBuf
import io.netty.buffer.ByteBufAllocator
import net.woggioni.rbcs.api.Cache
import net.woggioni.rbcs.api.RequestHandle
import net.woggioni.rbcs.api.ResponseHandle
import net.woggioni.rbcs.api.event.RequestStreamingEvent
import net.woggioni.rbcs.api.event.ResponseStreamingEvent
import net.woggioni.rbcs.common.ByteBufInputStream
import net.woggioni.rbcs.common.ByteBufOutputStream
import net.woggioni.rbcs.common.RBCS.digestString
import net.woggioni.rbcs.common.contextLogger
import net.woggioni.rbcs.common.extractChunk
import net.woggioni.jwo.JWO
import java.nio.channels.Channels
import java.security.MessageDigest
import java.time.Duration
import java.time.Instant
import java.util.concurrent.CompletableFuture
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.PriorityBlockingQueue
import java.util.concurrent.TimeUnit
import java.util.concurrent.atomic.AtomicLong
import java.util.zip.Deflater
import java.util.zip.DeflaterOutputStream
import java.util.zip.Inflater
import java.util.zip.InflaterOutputStream
import java.util.zip.InflaterInputStream
class InMemoryCache(
private val maxAge: Duration,
private val maxSize: Long,
private val digestAlgorithm: String?,
private val compressionEnabled: Boolean,
private val compressionLevel: Int,
private val chunkSize : Int
val maxAge: Duration,
val maxSize: Long,
val digestAlgorithm: String?,
val compressionEnabled: Boolean,
val compressionLevel: Int
) : Cache {
companion object {
@@ -40,42 +35,45 @@ class InMemoryCache(
private val size = AtomicLong()
private val map = ConcurrentHashMap<String, ByteBuf>()
private class RemovalQueueElement(val key: String, val value: ByteBuf, val expiry: Instant) :
Comparable<RemovalQueueElement> {
private class RemovalQueueElement(val key: String, val value : ByteBuf, val expiry : Instant) : Comparable<RemovalQueueElement> {
override fun compareTo(other: RemovalQueueElement) = expiry.compareTo(other.expiry)
}
private val removalQueue = PriorityBlockingQueue<RemovalQueueElement>()
@Volatile
private var running = true
private val garbageCollector = Thread.ofVirtual().name("in-memory-cache-gc").start {
while (running) {
val el = removalQueue.poll(1, TimeUnit.SECONDS) ?: continue
private val garbageCollector = Thread {
while(true) {
val el = removalQueue.take()
val buf = el.value
val now = Instant.now()
if (now > el.expiry) {
if(now > el.expiry) {
val removed = map.remove(el.key, buf)
if (removed) {
if(removed) {
updateSizeAfterRemoval(buf)
//Decrease the reference count for map
buf.release()
}
//Decrease the reference count for removalQueue
buf.release()
} else {
removalQueue.put(el)
Thread.sleep(minOf(Duration.between(now, el.expiry), Duration.ofSeconds(1)))
}
}
}.apply {
start()
}
private fun removeEldest(): Long {
while (true) {
private fun removeEldest() : Long {
while(true) {
val el = removalQueue.take()
val buf = el.value
val removed = map.remove(el.key, buf)
if (removed) {
//Decrease the reference count for removalQueue
buf.release()
if(removed) {
val newSize = updateSizeAfterRemoval(buf)
//Decrease the reference count for map
buf.release()
@@ -84,8 +82,8 @@ class InMemoryCache(
}
}
private fun updateSizeAfterRemoval(removed: ByteBuf): Long {
return size.updateAndGet { currentSize: Long ->
private fun updateSizeAfterRemoval(removed: ByteBuf) : Long {
return size.updateAndGet { currentSize : Long ->
currentSize - removed.readableBytes()
}
}
@@ -95,114 +93,58 @@ class InMemoryCache(
garbageCollector.join()
}
override fun get(key: String, responseHandle: ResponseHandle, alloc: ByteBufAllocator) {
try {
(digestAlgorithm
?.let(MessageDigest::getInstance)
?.let { md ->
digestString(key.toByteArray(), md)
} ?: key
).let { digest ->
map[digest]
?.let { value ->
val copy = value.retainedDuplicate()
responseHandle.handleEvent(ResponseStreamingEvent.RESPONSE_RECEIVED)
val output = alloc.compositeBuffer()
if (compressionEnabled) {
try {
val stream = ByteBufOutputStream(output).let {
val inflater = Inflater()
InflaterOutputStream(it, inflater)
}
stream.use { os ->
var readable = copy.readableBytes()
while (true) {
copy.readBytes(os, chunkSize.coerceAtMost(readable))
readable = copy.readableBytes()
val last = readable == 0
if (last) stream.flush()
if (output.readableBytes() >= chunkSize || last) {
val chunk = extractChunk(output, alloc)
val evt = if (last) {
ResponseStreamingEvent.LastChunkReceived(chunk)
} else {
ResponseStreamingEvent.ChunkReceived(chunk)
}
responseHandle.handleEvent(evt)
}
if (last) break
}
}
} finally {
copy.release()
}
} else {
responseHandle.handleEvent(
ResponseStreamingEvent.LastChunkReceived(copy)
)
}
} ?: responseHandle.handleEvent(ResponseStreamingEvent.NOT_FOUND)
override fun get(key: String) =
(digestAlgorithm
?.let(MessageDigest::getInstance)
?.let { md ->
digestString(key.toByteArray(), md)
} ?: key
).let { digest ->
map[digest]
?.let { value ->
val copy = value.retainedDuplicate()
copy.touch("This has to be released by the caller of the cache")
if (compressionEnabled) {
val inflater = Inflater()
Channels.newChannel(InflaterInputStream(ByteBufInputStream(copy), inflater))
} else {
Channels.newChannel(ByteBufInputStream(copy))
}
}
} catch (ex: Throwable) {
responseHandle.handleEvent(ResponseStreamingEvent.ExceptionCaught(ex))
}.let {
CompletableFuture.completedFuture(it)
}
override fun put(key: String, content: ByteBuf) =
(digestAlgorithm
?.let(MessageDigest::getInstance)
?.let { md ->
digestString(key.toByteArray(), md)
} ?: key).let { digest ->
content.retain()
val value = if (compressionEnabled) {
val deflater = Deflater(compressionLevel)
val buf = content.alloc().buffer()
buf.retain()
DeflaterOutputStream(ByteBufOutputStream(buf), deflater).use { outputStream ->
ByteBufInputStream(content).use { inputStream ->
JWO.copy(inputStream, outputStream)
}
}
buf
} else {
content
}
val old = map.put(digest, value)
val delta = value.readableBytes() - (old?.readableBytes() ?: 0)
var newSize = size.updateAndGet { currentSize : Long ->
currentSize + delta
}
removalQueue.put(RemovalQueueElement(digest, value.retain(), Instant.now().plus(maxAge)))
while(newSize > maxSize) {
newSize = removeEldest()
}
}.let {
CompletableFuture.completedFuture<Void>(null)
}
}
override fun put(
key: String,
responseHandle: ResponseHandle,
alloc: ByteBufAllocator
): CompletableFuture<RequestHandle> {
return CompletableFuture.completedFuture(object : RequestHandle {
val buf = alloc.heapBuffer()
val stream = ByteBufOutputStream(buf).let {
if (compressionEnabled) {
val deflater = Deflater(compressionLevel)
DeflaterOutputStream(it, deflater)
} else {
it
}
}
override fun handleEvent(evt: RequestStreamingEvent) {
when (evt) {
is RequestStreamingEvent.ChunkReceived -> {
evt.chunk.readBytes(stream, evt.chunk.readableBytes())
if (evt is RequestStreamingEvent.LastChunkReceived) {
(digestAlgorithm
?.let(MessageDigest::getInstance)
?.let { md ->
digestString(key.toByteArray(), md)
} ?: key
).let { digest ->
val oldSize = map.put(digest, buf.retain())?.let { old ->
val result = old.readableBytes()
old.release()
result
} ?: 0
val delta = buf.readableBytes() - oldSize
var newSize = size.updateAndGet { currentSize : Long ->
currentSize + delta
}
removalQueue.put(RemovalQueueElement(digest, buf, Instant.now().plus(maxAge)))
while(newSize > maxSize) {
newSize = removeEldest()
}
stream.close()
responseHandle.handleEvent(ResponseStreamingEvent.RESPONSE_RECEIVED)
}
}
}
is RequestStreamingEvent.ExceptionCaught -> {
stream.close()
}
else -> {
}
}
}
})
}
}
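The rewrite above replaces the event-driven `get`/`put` pair with a blocking-style API: `get` now resolves to a `ReadableByteChannel` (null on a miss) whose backing buffer the caller must release, per the `touch` note, and `put` takes the fully aggregated `ByteBuf`. The garbage collector likewise moves from a virtual thread polling with a one-second timeout to a platform thread that blocks on `take()` and re-queues entries that have not yet expired. A minimal consumer sketch under those assumptions; `drainToByteArray` is our name, not the project's:

```kotlin
import java.io.ByteArrayOutputStream
import java.nio.ByteBuffer
import java.nio.channels.ReadableByteChannel

// Drain a channel returned by the cache into a byte array. The caller is
// also responsible for releasing the buffer behind the channel, as the
// touch() message above points out.
fun drainToByteArray(channel: ReadableByteChannel): ByteArray =
    ByteArrayOutputStream().use { out ->
        val buf = ByteBuffer.allocate(0x4000)
        while (channel.read(buf) != -1) {
            buf.flip()
            out.write(buf.array(), 0, buf.limit())
            buf.clear()
        }
        out.toByteArray()
    }
```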


@@ -10,15 +10,13 @@ data class InMemoryCacheConfiguration(
val digestAlgorithm : String?,
val compressionEnabled: Boolean,
val compressionLevel: Int,
val chunkSize : Int
) : Configuration.Cache {
override fun materialize() = InMemoryCache(
maxAge,
maxSize,
digestAlgorithm,
compressionEnabled,
compressionLevel,
chunkSize
compressionLevel
)
override fun getNamespaceURI() = RBCS.RBCS_NAMESPACE_URI


@@ -31,16 +31,13 @@ class InMemoryCacheProvider : CacheProvider<InMemoryCacheConfiguration> {
?.let(String::toInt)
?: Deflater.DEFAULT_COMPRESSION
val digestAlgorithm = el.renderAttribute("digest") ?: "MD5"
val chunkSize = el.renderAttribute("chunk-size")
?.let(Integer::decode)
?: 0x4000
return InMemoryCacheConfiguration(
maxAge,
maxSize,
digestAlgorithm,
enableCompression,
compressionLevel,
chunkSize
compressionLevel
)
}
@@ -60,7 +57,6 @@ class InMemoryCacheProvider : CacheProvider<InMemoryCacheConfiguration> {
}?.let {
attr("compression-level", it.toString())
}
attr("chunk-size", chunkSize.toString())
}
result
}


@@ -124,7 +124,7 @@ object Parser {
val writeIdleTimeout = child.renderAttribute("write-idle-timeout")
?.let(Duration::parse) ?: Duration.of(60, ChronoUnit.SECONDS)
val maxRequestSize = child.renderAttribute("max-request-size")
?.let(Integer::decode) ?: 0x4000000
?.let(String::toInt) ?: 67108864
connection = Configuration.Connection(
readTimeout,
writeTimeout,
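The `max-request-size` parse above switches from `Integer::decode` to `String::toInt`, which quietly narrows the accepted syntax: `decode` understands decimal, hex, and octal literals, while `toInt` is decimal-only. A quick illustration:

```kotlin
fun main() {
    // Integer.decode accepts the hex form previously used as the default:
    println(Integer.decode("0x4000000"))   // 67108864
    println(Integer.decode("67108864"))    // 67108864

    // String.toInt is decimal-only; the hex form now fails to parse:
    runCatching { "0x4000000".toInt() }
        .onFailure { println(it.javaClass.simpleName) } // NumberFormatException
}
```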


@@ -17,8 +17,6 @@ import net.woggioni.rbcs.api.exception.CacheException
import net.woggioni.rbcs.api.exception.ContentTooLargeException
import net.woggioni.rbcs.common.contextLogger
import net.woggioni.rbcs.common.debug
import java.net.SocketException
import javax.net.ssl.SSLException
import javax.net.ssl.SSLPeerUnverifiedException
@ChannelHandler.Sharable
@@ -52,12 +50,7 @@ class ExceptionHandler : ChannelDuplexHandler() {
override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
when (cause) {
is DecoderException -> {
log.debug(cause.message, cause)
ctx.close()
}
is SocketException -> {
log.debug(cause.message, cause)
log.error(cause.message, cause)
ctx.close()
}
@@ -66,16 +59,10 @@ class ExceptionHandler : ChannelDuplexHandler() {
.addListener(ChannelFutureListener.CLOSE_ON_FAILURE)
}
is SSLException -> {
log.debug(cause.message, cause)
ctx.close()
}
is ContentTooLargeException -> {
ctx.writeAndFlush(TOO_BIG.retainedDuplicate())
.addListener(ChannelFutureListener.CLOSE_ON_FAILURE)
}
is ReadTimeoutException -> {
log.debug {
val channelId = ctx.channel().id().asShortText()
@@ -83,7 +70,6 @@ class ExceptionHandler : ChannelDuplexHandler() {
}
ctx.close()
}
is WriteTimeoutException -> {
log.debug {
val channelId = ctx.channel().id().asShortText()
@@ -91,13 +77,11 @@ class ExceptionHandler : ChannelDuplexHandler() {
}
ctx.close()
}
is CacheException -> {
log.error(cause.message, cause)
ctx.writeAndFlush(NOT_AVAILABLE.retainedDuplicate())
.addListener(ChannelFutureListener.CLOSE_ON_FAILURE)
}
else -> {
log.error(cause.message, cause)
ctx.writeAndFlush(SERVER_ERROR.retainedDuplicate())


@@ -2,66 +2,34 @@ package net.woggioni.rbcs.server.handler
import io.netty.buffer.Unpooled
import io.netty.channel.ChannelFutureListener
import io.netty.channel.ChannelHandler
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.DefaultFileRegion
import io.netty.channel.SimpleChannelInboundHandler
import io.netty.handler.codec.http.DefaultFullHttpResponse
import io.netty.handler.codec.http.DefaultHttpContent
import io.netty.handler.codec.http.DefaultHttpResponse
import io.netty.handler.codec.http.DefaultLastHttpContent
import io.netty.handler.codec.http.HttpContent
import io.netty.handler.codec.http.FullHttpRequest
import io.netty.handler.codec.http.HttpHeaderNames
import io.netty.handler.codec.http.HttpHeaderValues
import io.netty.handler.codec.http.HttpMethod
import io.netty.handler.codec.http.HttpObject
import io.netty.handler.codec.http.HttpRequest
import io.netty.handler.codec.http.HttpResponseStatus
import io.netty.handler.codec.http.HttpUtil
import io.netty.handler.codec.http.LastHttpContent
import io.netty.handler.stream.ChunkedNioStream
import net.woggioni.rbcs.api.Cache
import net.woggioni.rbcs.api.RequestHandle
import net.woggioni.rbcs.api.ResponseHandle
import net.woggioni.rbcs.api.event.RequestStreamingEvent
import net.woggioni.rbcs.api.event.ResponseStreamingEvent
import net.woggioni.rbcs.common.contextLogger
import net.woggioni.rbcs.common.debug
import net.woggioni.rbcs.server.debug
import net.woggioni.rbcs.server.warn
import java.nio.channels.FileChannel
import java.nio.file.Path
import java.util.concurrent.CompletableFuture
@ChannelHandler.Sharable
class ServerHandler(private val cache: Cache, private val serverPrefix: Path) :
SimpleChannelInboundHandler<HttpObject>() {
SimpleChannelInboundHandler<FullHttpRequest>() {
private val log = contextLogger()
override fun channelRead0(ctx: ChannelHandlerContext, msg: HttpObject) {
when(msg) {
is HttpRequest -> handleRequest(ctx, msg)
is HttpContent -> handleContent(msg)
}
}
private var requestHandle : CompletableFuture<RequestHandle?> = CompletableFuture.completedFuture(null)
private fun handleContent(content : HttpContent) {
content.retain()
requestHandle.thenAccept { handle ->
handle?.let {
val evt = if(content is LastHttpContent) {
RequestStreamingEvent.LastChunkReceived(content.content())
} else {
RequestStreamingEvent.ChunkReceived(content.content())
}
it.handleEvent(evt)
content.release()
} ?: content.release()
}
}
private fun handleRequest(ctx : ChannelHandlerContext, msg : HttpRequest) {
override fun channelRead0(ctx: ChannelHandlerContext, msg: FullHttpRequest) {
val keepAlive: Boolean = HttpUtil.isKeepAlive(msg)
val method = msg.method()
if (method === HttpMethod.GET) {
@@ -74,55 +42,54 @@ class ServerHandler(private val cache: Cache, private val serverPrefix: Path) :
return
}
if (serverPrefix == prefix) {
val responseHandle = ResponseHandle { evt ->
when (evt) {
is ResponseStreamingEvent.ResponseReceived -> {
val response = DefaultHttpResponse(msg.protocolVersion(), HttpResponseStatus.OK)
response.headers()[HttpHeaderNames.CONTENT_TYPE] = HttpHeaderValues.APPLICATION_OCTET_STREAM
if (!keepAlive) {
response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE)
} else {
response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE)
}
cache.get(key).thenApply { channel ->
if(channel != null) {
log.debug(ctx) {
"Cache hit for key '$key'"
}
val response = DefaultHttpResponse(msg.protocolVersion(), HttpResponseStatus.OK)
response.headers()[HttpHeaderNames.CONTENT_TYPE] = HttpHeaderValues.APPLICATION_OCTET_STREAM
if (!keepAlive) {
response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE)
response.headers().set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.IDENTITY)
} else {
response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE)
response.headers().set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED)
ctx.writeAndFlush(response)
}
is ResponseStreamingEvent.LastChunkReceived -> {
val channelFuture = ctx.writeAndFlush(DefaultLastHttpContent(evt.chunk))
if (!keepAlive) {
channelFuture
.addListener(ChannelFutureListener.CLOSE)
ctx.write(response)
when (channel) {
is FileChannel -> {
val content = DefaultFileRegion(channel, 0, channel.size())
if (keepAlive) {
ctx.write(content)
ctx.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT.retainedDuplicate())
} else {
ctx.writeAndFlush(content)
.addListener(ChannelFutureListener.CLOSE)
}
}
else -> {
val content = ChunkedNioStream(channel)
if (keepAlive) {
ctx.write(content).addListener {
content.close()
}
ctx.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT.retainedDuplicate())
} else {
ctx.writeAndFlush(content)
.addListener(ChannelFutureListener.CLOSE)
}
}
}
is ResponseStreamingEvent.ChunkReceived -> {
ctx.writeAndFlush(DefaultHttpContent(evt.chunk))
}
is ResponseStreamingEvent.ExceptionCaught -> {
ctx.fireExceptionCaught(evt.exception)
}
is ResponseStreamingEvent.NotFound -> {
val response = DefaultFullHttpResponse(msg.protocolVersion(), HttpResponseStatus.NOT_FOUND)
response.headers()[HttpHeaderNames.CONTENT_LENGTH] = 0
ctx.writeAndFlush(response)
}
is ResponseStreamingEvent.FileReceived -> {
val content = DefaultFileRegion(evt.file, 0, evt.file.size())
if (keepAlive) {
ctx.write(content)
ctx.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT.retainedDuplicate())
} else {
ctx.writeAndFlush(content)
.addListener(ChannelFutureListener.CLOSE)
}
} else {
log.debug(ctx) {
"Cache miss for key '$key'"
}
val response = DefaultFullHttpResponse(msg.protocolVersion(), HttpResponseStatus.NOT_FOUND)
response.headers()[HttpHeaderNames.CONTENT_LENGTH] = 0
ctx.writeAndFlush(response)
}
}
cache.get(key, responseHandle, ctx.alloc())
}.whenComplete { _, ex -> ex?.let(ctx::fireExceptionCaught) }
} else {
log.warn(ctx) {
"Got request for unhandled path '${msg.uri()}'"
@@ -140,32 +107,15 @@ class ServerHandler(private val cache: Cache, private val serverPrefix: Path) :
log.debug(ctx) {
"Added value for key '$key' to build cache"
}
val responseHandle = ResponseHandle { evt ->
when (evt) {
is ResponseStreamingEvent.ResponseReceived -> {
val response = DefaultFullHttpResponse(
msg.protocolVersion(), HttpResponseStatus.CREATED,
Unpooled.copiedBuffer(key.toByteArray())
)
response.headers()[HttpHeaderNames.CONTENT_LENGTH] = response.content().readableBytes()
ctx.writeAndFlush(response)
this.requestHandle = CompletableFuture.completedFuture(null)
}
is ResponseStreamingEvent.ChunkReceived -> {
evt.chunk.release()
}
is ResponseStreamingEvent.ExceptionCaught -> {
ctx.fireExceptionCaught(evt.exception)
}
else -> {}
}
}
this.requestHandle = cache.put(key, responseHandle, ctx.alloc()).exceptionally { ex ->
cache.put(key, msg.content()).thenRun {
val response = DefaultFullHttpResponse(
msg.protocolVersion(), HttpResponseStatus.CREATED,
Unpooled.copiedBuffer(key.toByteArray())
)
response.headers()[HttpHeaderNames.CONTENT_LENGTH] = response.content().readableBytes()
ctx.writeAndFlush(response)
}.whenComplete { _, ex ->
ctx.fireExceptionCaught(ex)
null
}.also {
log.debug { "Replacing request handle with $it"}
}
} else {
log.warn(ctx) {
@@ -175,12 +125,9 @@ class ServerHandler(private val cache: Cache, private val serverPrefix: Path) :
response.headers()[HttpHeaderNames.CONTENT_LENGTH] = "0"
ctx.writeAndFlush(response)
}
} else if (method == HttpMethod.TRACE) {
} else if(method == HttpMethod.TRACE) {
val replayedRequestHead = ctx.alloc().buffer()
replayedRequestHead.writeCharSequence(
"TRACE ${Path.of(msg.uri())} ${msg.protocolVersion().text()}\r\n",
Charsets.US_ASCII
)
replayedRequestHead.writeCharSequence("TRACE ${Path.of(msg.uri())} ${msg.protocolVersion().text()}\r\n", Charsets.US_ASCII)
msg.headers().forEach { (key, value) ->
replayedRequestHead.apply {
writeCharSequence(key, Charsets.US_ASCII)
@@ -190,24 +137,16 @@ class ServerHandler(private val cache: Cache, private val serverPrefix: Path) :
}
}
replayedRequestHead.writeCharSequence("\r\n", Charsets.US_ASCII)
this.requestHandle = CompletableFuture.completedFuture(RequestHandle { evt ->
when(evt) {
is RequestStreamingEvent.LastChunkReceived -> {
ctx.writeAndFlush(DefaultLastHttpContent(evt.chunk.retain()))
this.requestHandle = CompletableFuture.completedFuture(null)
}
is RequestStreamingEvent.ChunkReceived -> ctx.writeAndFlush(DefaultHttpContent(evt.chunk.retain()))
is RequestStreamingEvent.ExceptionCaught -> ctx.fireExceptionCaught(evt.exception)
else -> {
}
}
}).also {
log.debug { "Replacing request handle with $it"}
val requestBody = msg.content()
requestBody.retain()
val responseBody = ctx.alloc().compositeBuffer(2).apply {
addComponents(true, replayedRequestHead)
addComponents(true, requestBody)
}
val response = DefaultHttpResponse(msg.protocolVersion(), HttpResponseStatus.OK)
val response = DefaultFullHttpResponse(msg.protocolVersion(), HttpResponseStatus.OK, responseBody)
response.headers().apply {
set(HttpHeaderNames.CONTENT_TYPE, "message/http")
set(HttpHeaderNames.CONTENT_LENGTH, responseBody.readableBytes())
}
ctx.writeAndFlush(response)
} else {
@@ -219,11 +158,4 @@ class ServerHandler(private val cache: Cache, private val serverPrefix: Path) :
ctx.writeAndFlush(response)
}
}
override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
requestHandle.thenAccept { handle ->
handle?.handleEvent(RequestStreamingEvent.ExceptionCaught(cause))
}
super.exceptionCaught(ctx, cause)
}
}
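On a cache hit the handler now distinguishes file-backed entries, served via Netty's zero-copy `DefaultFileRegion`, from other channels, served via `ChunkedNioStream`. A minimal sketch of the `FileChannel` branch with a helper name of our own; note that `FileRegion` bypasses userspace buffers, so it does not work behind an `SslHandler`, and the `ChunkedNioStream` branch needs a `ChunkedWriteHandler` in the pipeline:

```kotlin
import io.netty.channel.ChannelFutureListener
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.DefaultFileRegion
import io.netty.handler.codec.http.HttpResponse
import io.netty.handler.codec.http.LastHttpContent
import java.nio.channels.FileChannel

// Stream a FileChannel to the client with zero-copy FileRegion and close
// the connection afterwards (the non-keep-alive path above).
fun writeFileAndClose(ctx: ChannelHandlerContext, response: HttpResponse, file: FileChannel) {
    ctx.write(response)
    ctx.write(DefaultFileRegion(file, 0, file.size()))
    ctx.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT)
        .addListener(ChannelFutureListener.CLOSE)
}
```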


@@ -1,11 +1,10 @@
package net.woggioni.rbcs.server.throttling
import io.netty.channel.ChannelHandler.Sharable
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.ChannelInboundHandlerAdapter
import io.netty.handler.codec.http.DefaultFullHttpResponse
import io.netty.handler.codec.http.HttpContent
import io.netty.handler.codec.http.HttpHeaderNames
import io.netty.handler.codec.http.HttpRequest
import io.netty.handler.codec.http.HttpResponseStatus
import io.netty.handler.codec.http.HttpVersion
import net.woggioni.rbcs.api.Configuration
@@ -19,19 +18,15 @@ import java.time.temporal.ChronoUnit
import java.util.concurrent.TimeUnit
class ThrottlingHandler(cfg: Configuration) : ChannelInboundHandlerAdapter() {
private companion object {
@JvmStatic
private val log = contextLogger()
}
@Sharable
class ThrottlingHandler(cfg: Configuration) :
ChannelInboundHandlerAdapter() {
private val log = contextLogger()
private val bucketManager = BucketManager.from(cfg)
private val connectionConfiguration = cfg.connection
private var queuedContent : MutableList<HttpContent>? = null
/**
* If the suggested waiting time from the bucket is lower than this
* amount, then the server will simply wait by itself before sending a response
@@ -43,38 +38,29 @@ class ThrottlingHandler(cfg: Configuration) : ChannelInboundHandlerAdapter() {
connectionConfiguration.writeIdleTimeout
).dividedBy(2)
override fun channelRead(ctx: ChannelHandlerContext, msg: Any) {
if(msg is HttpRequest) {
val buckets = mutableListOf<Bucket>()
val user = ctx.channel().attr(RemoteBuildCacheServer.userAttribute).get()
if (user != null) {
bucketManager.getBucketByUser(user)?.let(buckets::addAll)
val buckets = mutableListOf<Bucket>()
val user = ctx.channel().attr(RemoteBuildCacheServer.userAttribute).get()
if (user != null) {
bucketManager.getBucketByUser(user)?.let(buckets::addAll)
}
val groups = ctx.channel().attr(RemoteBuildCacheServer.groupAttribute).get() ?: emptySet()
if (groups.isNotEmpty()) {
groups.forEach { group ->
bucketManager.getBucketByGroup(group)?.let(buckets::add)
}
val groups = ctx.channel().attr(RemoteBuildCacheServer.groupAttribute).get() ?: emptySet()
if (groups.isNotEmpty()) {
groups.forEach { group ->
bucketManager.getBucketByGroup(group)?.let(buckets::add)
}
}
if (user == null && groups.isEmpty()) {
bucketManager.getBucketByAddress(ctx.channel().remoteAddress() as InetSocketAddress)?.let(buckets::add)
}
if (buckets.isEmpty()) {
super.channelRead(ctx, msg)
} else {
handleBuckets(buckets, ctx, msg, true)
}
ctx.channel().id()
} else if(msg is HttpContent) {
queuedContent?.add(msg) ?: super.channelRead(ctx, msg)
}
if (user == null && groups.isEmpty()) {
bucketManager.getBucketByAddress(ctx.channel().remoteAddress() as InetSocketAddress)?.let(buckets::add)
}
if (buckets.isEmpty()) {
return super.channelRead(ctx, msg)
} else {
super.channelRead(ctx, msg)
handleBuckets(buckets, ctx, msg, true)
}
}
private fun handleBuckets(buckets: List<Bucket>, ctx: ChannelHandlerContext, msg: Any, delayResponse: Boolean) {
private fun handleBuckets(buckets : List<Bucket>, ctx : ChannelHandlerContext, msg : Any, delayResponse : Boolean) {
var nextAttempt = -1L
for (bucket in buckets) {
val bucketNextAttempt = bucket.removeTokensWithEstimate(1)
@@ -82,24 +68,17 @@ class ThrottlingHandler(cfg: Configuration) : ChannelInboundHandlerAdapter() {
nextAttempt = bucketNextAttempt
}
}
if (nextAttempt < 0) {
if(nextAttempt < 0) {
super.channelRead(ctx, msg)
queuedContent?.let {
for(content in it) {
super.channelRead(ctx, content)
}
queuedContent = null
}
return
}
val waitDuration = Duration.of(LongMath.ceilDiv(nextAttempt, 100_000_000L) * 100L, ChronoUnit.MILLIS)
if (delayResponse && waitDuration < waitThreshold) {
ctx.executor().schedule({
handleBuckets(buckets, ctx, msg, false)
}, waitDuration.toMillis(), TimeUnit.MILLISECONDS)
} else {
val waitDuration = Duration.of(LongMath.ceilDiv(nextAttempt, 100_000_000L) * 100L, ChronoUnit.MILLIS)
if (delayResponse && waitDuration < waitThreshold) {
this.queuedContent = mutableListOf()
ctx.executor().schedule({
handleBuckets(buckets, ctx, msg, false)
}, waitDuration.toMillis(), TimeUnit.MILLISECONDS)
} else {
sendThrottledResponse(ctx, waitDuration)
}
sendThrottledResponse(ctx, waitDuration)
}
}
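The wait computation above turns the bucket's nanosecond estimate into a delay rounded up to the next 100 ms slice, then either reschedules the read on the event loop (when under `waitThreshold`) or sends a throttled response. A worked example, with a plain-Kotlin stand-in for the `LongMath.ceilDiv` helper used above (assumed to be ceiling integer division for non-negative operands):

```kotlin
// Ceiling division stand-in for LongMath.ceilDiv.
fun ceilDiv(x: Long, y: Long): Long = (x + y - 1) / y

fun main() {
    val nextAttemptNanos = 250_000_000L  // bucket estimate: retry in ~250 ms
    // Round up to the next 100 ms slice, as in the handler above:
    val waitMillis = ceilDiv(nextAttemptNanos, 100_000_000L) * 100L
    println(waitMillis)  // 300
}
```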


@@ -39,7 +39,7 @@
<xs:attribute name="idle-timeout" type="xs:duration" use="optional" default="PT30S"/>
<xs:attribute name="read-idle-timeout" type="xs:duration" use="optional" default="PT60S"/>
<xs:attribute name="write-idle-timeout" type="xs:duration" use="optional" default="PT60S"/>
<xs:attribute name="max-request-size" type="rbcs:byteSize" use="optional" default="0x4000000"/>
<xs:attribute name="max-request-size" type="xs:unsignedInt" use="optional" default="67108864"/>
</xs:complexType>
<xs:complexType name="eventExecutorType">
@@ -52,11 +52,10 @@
<xs:complexContent>
<xs:extension base="rbcs:cacheType">
<xs:attribute name="max-age" type="xs:duration" default="P1D"/>
<xs:attribute name="max-size" type="rbcs:byteSize" default="0x1000000"/>
<xs:attribute name="max-size" type="xs:token" default="0x1000000"/>
<xs:attribute name="digest" type="xs:token" default="MD5"/>
<xs:attribute name="enable-compression" type="xs:boolean" default="true"/>
<xs:attribute name="compression-level" type="xs:byte" default="-1"/>
<xs:attribute name="chunk-size" type="rbcs:byteSize" default="0x4000"/>
</xs:extension>
</xs:complexContent>
</xs:complexType>
@@ -69,7 +68,6 @@
<xs:attribute name="digest" type="xs:token" default="MD5"/>
<xs:attribute name="enable-compression" type="xs:boolean" default="true"/>
<xs:attribute name="compression-level" type="xs:byte" default="-1"/>
<xs:attribute name="chunk-size" type="rbcs:byteSize" default="0x4000"/>
</xs:extension>
</xs:complexContent>
</xs:complexType>
@@ -222,10 +220,5 @@
<xs:attribute name="port" type="xs:unsignedShort" use="required"/>
</xs:complexType>
<xs:simpleType name="byteSize">
<xs:restriction base="xs:token">
<xs:pattern value="(0x[a-f0-9]+|[0-9]+)"/>
</xs:restriction>
</xs:simpleType>
</xs:schema>
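Dropping the `byteSize` simple type costs size attributes their hex spelling: `max-request-size` becomes a plain `xs:unsignedInt`, so a value like `0x4000000` no longer validates, while `max-size` is relaxed to `xs:token`, which accepts any token. The removed pattern, checked in Kotlin for illustration:

```kotlin
// The removed byteSize restriction: decimal or lowercase-hex tokens.
val byteSize = Regex("0x[a-f0-9]+|[0-9]+")

fun main() {
    println(byteSize.matches("0x4000000")) // true  (previously valid, now rejected by xs:unsignedInt)
    println(byteSize.matches("67108864"))  // true  (still valid)
    println(byteSize.matches("64MiB"))     // false (never valid)
}
```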


@@ -47,13 +47,11 @@ abstract class AbstractBasicAuthServerTest : AbstractServerTest() {
),
users.asSequence().map { it.name to it}.toMap(),
sequenceOf(writersGroup, readersGroup).map { it.name to it}.toMap(),
FileSystemCacheConfiguration(
this.cacheDir,
FileSystemCacheConfiguration(this.cacheDir,
maxAge = Duration.ofSeconds(3600 * 24),
digestAlgorithm = "MD5",
compressionLevel = Deflater.DEFAULT_COMPRESSION,
compressionEnabled = false,
chunkSize = 0x1000
compressionEnabled = false
),
Configuration.BasicAuthentication(),
null,


@@ -156,8 +156,7 @@ abstract class AbstractTlsServerTest : AbstractServerTest() {
maxAge = Duration.ofSeconds(3600 * 24),
compressionEnabled = true,
compressionLevel = Deflater.DEFAULT_COMPRESSION,
digestAlgorithm = "MD5",
chunkSize = 0x1000
digestAlgorithm = "MD5"
),
// InMemoryCacheConfiguration(
// maxAge = Duration.ofSeconds(3600 * 24),


@@ -86,7 +86,7 @@ class BasicAuthServerTest : AbstractBasicAuthServerTest() {
@Test
@Order(4)
fun putAsAWriterUser() {
val client: HttpClient = HttpClient.newBuilder().version(HttpClient.Version.HTTP_1_1).build()
val client: HttpClient = HttpClient.newHttpClient()
val (key, value) = keyValuePair
val user = cfg.users.values.find {
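The test-client swap here (and in `NoAuthServerTest` below) is behavior-relevant: `HttpClient.newHttpClient()` prefers HTTP/2 and falls back to HTTP/1.1 when the server does not negotiate it, whereas the removed builder pinned HTTP/1.1. For comparison:

```kotlin
import java.net.http.HttpClient

// Default client: HTTP/2 preferred, HTTP/1.1 fallback.
val defaultClient = HttpClient.newHttpClient()

// The removed construction, pinned to HTTP/1.1:
val http11Client = HttpClient.newBuilder()
    .version(HttpClient.Version.HTTP_1_1)
    .build()
```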


@@ -52,8 +52,7 @@ class NoAuthServerTest : AbstractServerTest() {
compressionEnabled = true,
digestAlgorithm = "MD5",
compressionLevel = Deflater.DEFAULT_COMPRESSION,
maxSize = 0x1000000,
chunkSize = 0x1000
maxSize = 0x1000000
),
null,
null,
@@ -81,7 +80,7 @@ class NoAuthServerTest : AbstractServerTest() {
@Test
@Order(1)
fun putWithNoAuthorizationHeader() {
val client: HttpClient = HttpClient.newBuilder().version(HttpClient.Version.HTTP_1_1).build()
val client: HttpClient = HttpClient.newHttpClient()
val (key, value) = keyValuePair
val requestBuilder = newRequestBuilder(key)


@@ -11,7 +11,7 @@
idle-timeout="PT30M"
max-request-size="101325"/>
<event-executor use-virtual-threads="false"/>
<cache xs:type="rbcs:fileSystemCacheType" path="/tmp/rbcs" max-age="P7D" chunk-size="0xa910"/>
<cache xs:type="rbcs:fileSystemCacheType" path="/tmp/rbcs" max-age="P7D"/>
<authentication>
<none/>
</authentication>


@@ -13,7 +13,7 @@
read-timeout="PT5M"
write-timeout="PT5M"/>
<event-executor use-virtual-threads="true"/>
<cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" max-size="16777216" compression-mode="deflate" chunk-size="123">
<cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" max-size="16777216" compression-mode="deflate">
<server host="memcached" port="11211"/>
</cache>
<authorization>


@@ -12,7 +12,7 @@
idle-timeout="PT30M"
max-request-size="101325"/>
<event-executor use-virtual-threads="false"/>
<cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" max-size="101325" digest="SHA-256" chunk-size="456">
<cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" max-size="101325" digest="SHA-256">
<server host="127.0.0.1" port="11211" max-connections="10" connection-timeout="PT20S"/>
</cache>
<authentication>


@@ -11,7 +11,7 @@
idle-timeout="PT30M"
max-request-size="4096"/>
<event-executor use-virtual-threads="false"/>
<cache xs:type="rbcs:inMemoryCacheType" max-age="P7D" chunk-size="0xa91f"/>
<cache xs:type="rbcs:inMemoryCacheType" max-age="P7D"/>
<authorization>
<users>
<user name="user1" password="password1">


@@ -29,6 +29,6 @@ include 'rbcs-api'
include 'rbcs-common'
include 'rbcs-server-memcache'
include 'rbcs-cli'
include 'docker'
include 'rbcs-client'
include 'rbcs-server'
include 'docker'