Compare commits: 0463038aaa...doc

1 commit (c19bc9e91e)
LICENSE (new file, 20 lines)
@@ -0,0 +1,20 @@
The MIT License (MIT)

Copyright (c) 2017 Y. T. CHUNG <zonyitoo@gmail.com>

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
README.md (new file, 110 lines)
@@ -0,0 +1,110 @@
# Remote Build Cache Server

Remote Build Cache Server (RBCS for short) lets you share and reuse unchanged build
and test outputs across the team. This speeds up local and CI builds, since no cycles are
wasted rebuilding components that are unaffected by new code changes. RBCS supports both
Gradle and Maven build tool environments.

## Getting Started

### Downloading the jar file
You can download the latest version from [this link](https://gitea.woggioni.net/woggioni/-/packages/maven/net.woggioni-rbcs-cli/).

If you want to use memcache as a storage backend you'll also need to download [the memcache plugin](https://gitea.woggioni.net/woggioni/-/packages/maven/net.woggioni-rbcs-server-memcache/).

### Using the Docker image
You can pull the latest Docker image with

```bash
docker pull gitea.woggioni.net/woggioni/rbcs:latest
```
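Once pulled, the image can be run like any other container. The invocation below is a sketch only: the listening port and the configuration mount point are assumptions, not documented values, so check them against the image's actual documentation.

```bash
# Hypothetical invocation: the 8080 port and the /etc/rbcs mount point are
# assumptions; substitute the values the image actually uses.
docker run --rm -p 8080:8080 \
    -v "$PWD/rbcs.xml:/etc/rbcs/rbcs.xml:ro" \
    gitea.woggioni.net/woggioni/rbcs:latest
```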
## Usage

## Configuration

### Using RBCS with Gradle

Add the following to your `settings.gradle`:

```groovy
buildCache {
    remote(HttpBuildCache) {
        url = 'https://rbcs.example.com/'
    }
}
```

### Using RBCS with Maven

Read [here](https://maven.apache.org/extensions/maven-build-cache-extension/remote-cache.html).
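For quick reference, the Maven Build Cache extension reads its remote-cache settings from `.mvn/maven-build-cache-config.xml`. The fragment below is a minimal sketch following the format documented at that link; the `rbcs.example.com` URL is a placeholder.

```xml
<!-- .mvn/maven-build-cache-config.xml — minimal sketch; the URL is a placeholder -->
<cache xmlns="http://maven.apache.org/BUILD-CACHE-CONFIG/1.0.0">
    <configuration>
        <remote enabled="true">
            <url>https://rbcs.example.com</url>
        </remote>
    </configuration>
</cache>
```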
## FAQ

### Why should I use a build cache?

#### Build Caches Improve Build & Test Performance

Building software consists of a number of steps, like compiling sources, executing tests, and linking binaries. We've seen that a binary artifact repository helps when such a step requires an external component: the artifact is downloaded from the repository rather than built locally.

However, many additional steps in the build process can be optimized to reduce the build time. An obvious strategy is to avoid executing the build steps that dominate the total build time when they are not needed. Most build times are dominated by the testing step.

While binary repositories cannot capture the outcome of a test build step (only the test reports, when included in binary artifacts), build caches are designed to eliminate redundant executions for every build step. Moreover, a build cache generalizes the idea of avoiding work for any incremental step of the build, including test execution, compilation, and resource processing.

The mechanism itself is comparable to a pure function: given some inputs, such as source files and environment parameters, we know that the output is always going to be the same. As a result, we can cache the output and retrieve it based on a simple cryptographic hash of the inputs, as in the sketch below. Build caching is supported natively by some build tools.
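To make the pure-function analogy concrete, here is a minimal standalone sketch (not RBCS code) of how a build tool can derive a cache key from a task's inputs:

```kotlin
import java.security.MessageDigest

// Sketch only: the cache key is a cryptographic hash of every task input.
// If no input changes, the key is identical and the cached output can be
// fetched instead of re-running the task.
fun cacheKey(sourceFiles: List<ByteArray>, toolVersion: String, jvmArgs: List<String>): String {
    val md = MessageDigest.getInstance("SHA-256")
    sourceFiles.forEach(md::update)
    md.update(toolVersion.toByteArray())
    jvmArgs.forEach { md.update(it.toByteArray()) }
    return md.digest().joinToString("") { "%02x".format(it) }
}
```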
#### Improve CI builds with a remote build cache

When analyzing the role of a build cache it is important to take into account the granularity of the changes that it caches. Imagine a full build for a project with 40 to 50 modules which fails at the last step (deployment) because the staging environment is temporarily unavailable. Although the vast majority of the build steps (potentially thousands) succeed, the change cannot be deployed to the staging environment. Without a build cache, one typically relies on a very complex CI configuration to reuse build step outputs, or has to repeat the full build once the environment is available again.

Some build tools don't support incremental builds properly: the outputs of a build started from scratch may differ from those of subsequent builds that rely on the initial build's output. As a result, to preserve build integrity, it is crucial to rebuild from scratch, or "cleanly", in this scenario.

With a build cache, only the last step needs to be executed and the build can be re-triggered when the environment is back online. This automatically saves all of the time and resources required across the different build steps which were successfully executed. Instead of executing the intermediate steps, the build tool pulls their outputs from the build cache, avoiding a lot of redundant work.

#### Share outputs with a remote build cache

One of the most important advantages of a remote build cache is the ability to share build outputs. In most CI configurations, for example, a number of pipelines are created. These may include one for building the sources, one for testing, one for publishing the outcomes to a remote repository, and other pipelines to test on different platforms. There are even situations where CI builds partially build a project (i.e. some modules and not others).

Most of those pipelines share a lot of intermediate build steps. All builds which perform testing require the binaries to be ready. All publishing builds require all previous steps to be executed. And because modern CI infrastructure means executing everything in containerized (isolated) environments, significant resources are wasted by repeatedly building the same intermediate artifacts.

A remote build cache reduces this overhead by orders of magnitude because it provides a way for all those pipelines to share their outputs. After all, there is no point recreating an output that is already available in the cache.

Because there are inherent dependencies between the software components of a build, introducing a build cache dramatically reduces the cost of splitting a component into multiple pieces, allowing for increased modularity without increased overhead.

#### Make local developers more efficient with remote build caches

It is common for different teams within a company to work on different modules of a single large application. In this case, most teams don't care about building the other parts of the software. By introducing a remote cache, developers immediately benefit from pre-built artifacts when checking out code: because it has already been built on CI, they don't have to build it locally.

Introducing a remote cache is a huge benefit for those developers. Consider that a typical developer's day begins with a code checkout. Most likely the checked-out code has already been built on CI, so no time is wasted running the first build of the day: the remote cache provides all of the intermediate artifacts needed. And in the event local changes are made, the remote cache still provides partial cache hits for projects which are independent. As other developers in the organization request CI builds, the remote cache continues to populate, increasing the likelihood of these remote cache hits across team members.
gradle.properties
@@ -2,10 +2,11 @@ org.gradle.configuration-cache=false
 org.gradle.parallel=true
 org.gradle.caching=true

-rbcs.version = 0.1.6
+rbcs.version = 0.1.4

-lys.version = 2025.02.08
+lys.version = 2025.02.05

 gitea.maven.url = https://gitea.woggioni.net/api/packages/woggioni/maven
 docker.registry.url=gitea.woggioni.net

+jpms-check.configurationName = runtimeClasspath
module-info.java (rbcs-api)
@@ -4,5 +4,4 @@ module net.woggioni.rbcs.api {
     requires io.netty.buffer;
     exports net.woggioni.rbcs.api;
     exports net.woggioni.rbcs.api.exception;
-    exports net.woggioni.rbcs.api.event;
 }
Cache.java (rbcs-api)
@@ -1,17 +1,14 @@
 package net.woggioni.rbcs.api;

-import io.netty.buffer.ByteBufAllocator;
+import io.netty.buffer.ByteBuf;
+import net.woggioni.rbcs.api.exception.ContentTooLargeException;

+import java.nio.channels.ReadableByteChannel;
 import java.util.concurrent.CompletableFuture;


 public interface Cache extends AutoCloseable {
+    CompletableFuture<ReadableByteChannel> get(String key);

-    default void get(String key, ResponseHandle responseHandle, ByteBufAllocator alloc) {
-        throw new UnsupportedOperationException();
-    }
-
-    default CompletableFuture<RequestHandle> put(String key, ResponseHandle responseHandle, ByteBufAllocator alloc) {
-        throw new UnsupportedOperationException();
-    }
+    CompletableFuture<Void> put(String key, ByteBuf content) throws ContentTooLargeException;
 }
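As an illustration, a caller of the `CompletableFuture`-based variant of this interface (the right-hand side of the hunk above) might look like the following sketch; the draining loop and the `Unpooled` wrapper are assumptions about typical usage, not RBCS code:

```kotlin
import io.netty.buffer.Unpooled
import net.woggioni.rbcs.api.Cache
import java.nio.ByteBuffer

// Sketch: store an entry, then read it back through the asynchronous API.
fun roundTrip(cache: Cache, key: String, payload: ByteArray) {
    cache.put(key, Unpooled.wrappedBuffer(payload)).join()
    cache.get(key).thenAccept { channel ->
        channel?.use {
            val buf = ByteBuffer.allocate(0x1000)
            while (it.read(buf) >= 0) {
                buf.flip()   // consume the bytes here
                buf.clear()
            }
        } ?: println("cache miss for '$key'")
    }.join()
}
```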
RequestHandle.java (rbcs-api, deleted file)
@@ -1,8 +0,0 @@
package net.woggioni.rbcs.api;

import net.woggioni.rbcs.api.event.RequestStreamingEvent;

@FunctionalInterface
public interface RequestHandle {
    void handleEvent(RequestStreamingEvent evt);
}
ResponseHandle.java (rbcs-api, deleted file)
@@ -1,8 +0,0 @@
package net.woggioni.rbcs.api;

import net.woggioni.rbcs.api.event.ResponseStreamingEvent;

@FunctionalInterface
public interface ResponseHandle {
    void handleEvent(ResponseStreamingEvent evt);
}
RequestStreamingEvent.java (rbcs-api, deleted file)
@@ -1,26 +0,0 @@
package net.woggioni.rbcs.api.event;

import io.netty.buffer.ByteBuf;
import lombok.Getter;
import lombok.RequiredArgsConstructor;

public sealed interface RequestStreamingEvent {

    @Getter
    @RequiredArgsConstructor
    non-sealed class ChunkReceived implements RequestStreamingEvent {
        private final ByteBuf chunk;
    }

    final class LastChunkReceived extends ChunkReceived {
        public LastChunkReceived(ByteBuf chunk) {
            super(chunk);
        }
    }

    @Getter
    @RequiredArgsConstructor
    final class ExceptionCaught implements RequestStreamingEvent {
        private final Throwable exception;
    }
}
ResponseStreamingEvent.java (rbcs-api, deleted file)
@@ -1,42 +0,0 @@
package net.woggioni.rbcs.api.event;

import io.netty.buffer.ByteBuf;
import lombok.Getter;
import lombok.RequiredArgsConstructor;

import java.nio.channels.FileChannel;

public sealed interface ResponseStreamingEvent {

    final class ResponseReceived implements ResponseStreamingEvent {
    }

    @Getter
    @RequiredArgsConstructor
    non-sealed class ChunkReceived implements ResponseStreamingEvent {
        private final ByteBuf chunk;
    }

    @Getter
    @RequiredArgsConstructor
    non-sealed class FileReceived implements ResponseStreamingEvent {
        private final FileChannel file;
    }

    final class LastChunkReceived extends ChunkReceived {
        public LastChunkReceived(ByteBuf chunk) {
            super(chunk);
        }
    }

    @Getter
    @RequiredArgsConstructor
    final class ExceptionCaught implements ResponseStreamingEvent {
        private final Throwable exception;
    }

    final class NotFound implements ResponseStreamingEvent { }

    NotFound NOT_FOUND = new NotFound();
    ResponseReceived RESPONSE_RECEIVED = new ResponseReceived();
}
build.gradle (rbcs-cli)
@@ -44,6 +44,7 @@ envelopeJar {
 dependencies {
     implementation catalog.jwo
     implementation catalog.slf4j.api
+    implementation catalog.netty.codec.http
     implementation catalog.picocli

     implementation project(':rbcs-client')
BenchmarkCommand.kt (rbcs-cli)
@@ -6,8 +6,6 @@ import net.woggioni.rbcs.common.contextLogger
 import net.woggioni.rbcs.common.error
 import net.woggioni.rbcs.common.info
 import net.woggioni.jwo.JWO
-import net.woggioni.jwo.LongMath
-import net.woggioni.rbcs.common.debug
 import picocli.CommandLine
 import java.security.SecureRandom
 import java.time.Duration
@@ -48,7 +46,6 @@ class BenchmarkCommand : RbcsCommand() {
     clientCommand.configuration.profiles[profileName]
         ?: throw IllegalArgumentException("Profile $profileName does not exist in configuration")
 }
-val progressThreshold = LongMath.ceilDiv(numberOfEntries.toLong(), 20)
 RemoteBuildCacheClient(profile).use { client ->

     val entryGenerator = sequence {
@@ -82,12 +79,7 @@ class BenchmarkCommand : RbcsCommand() {
         completionQueue.put(result)
     }
     semaphore.release()
-    val completed = completionCounter.incrementAndGet()
-    if (completed.mod(progressThreshold) == 0L) {
-        log.debug {
-            "Inserted $completed / $numberOfEntries"
-        }
-    }
+    completionCounter.incrementAndGet()
 }
 } else {
     Thread.sleep(0)
@@ -129,12 +121,7 @@ class BenchmarkCommand : RbcsCommand() {
     }
 }
 future.whenComplete { _, _ ->
-    val completed = completionCounter.incrementAndGet()
-    if (completed.mod(progressThreshold) == 0L) {
-        log.debug {
-            "Retrieved $completed / ${entries.size}"
-        }
-    }
+    completionCounter.incrementAndGet()
     semaphore.release()
 }
 } else {
build.gradle (rbcs-client)
@@ -6,11 +6,9 @@ plugins {
 dependencies {
     implementation project(':rbcs-api')
     implementation project(':rbcs-common')
+    implementation catalog.picocli
     implementation catalog.slf4j.api
     implementation catalog.netty.buffer
-    implementation catalog.netty.handler
-    implementation catalog.netty.transport
-    implementation catalog.netty.common
     implementation catalog.netty.codec.http

     testRuntimeOnly catalog.logback.classic
RemoteBuildCacheClient.kt (rbcs-client)
@@ -45,9 +45,9 @@ import java.time.Duration
 import java.util.Base64
 import java.util.concurrent.CompletableFuture
 import java.util.concurrent.atomic.AtomicInteger
-import kotlin.random.Random
 import io.netty.util.concurrent.Future as NettyFuture


 class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoCloseable {
     private val group: NioEventLoopGroup
     private var sslContext: SslContext
@@ -206,7 +206,6 @@ class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoCloseable {
     retryPolicy.initialDelayMillis.toDouble(),
     retryPolicy.exp,
     outcomeHandler,
-    Random.Default,
     operation
 )
 } else {
executeWithRetry and related types (rbcs-client)
@@ -3,8 +3,6 @@ package net.woggioni.rbcs.client
 import io.netty.util.concurrent.EventExecutorGroup
 import java.util.concurrent.CompletableFuture
 import java.util.concurrent.TimeUnit
-import kotlin.math.pow
-import kotlin.random.Random

 sealed class OperationOutcome<T> {
     class Success<T>(val result: T) : OperationOutcome<T>()
@@ -26,10 +24,8 @@ fun <T> executeWithRetry(
     initialDelay: Double,
     exp: Double,
     outcomeHandler: OutcomeHandler<T>,
-    randomizer : Random?,
     cb: () -> CompletableFuture<T>
 ): CompletableFuture<T> {
     val finalResult = cb()
     var future = finalResult
     var shortCircuit = false
@@ -50,7 +46,7 @@ fun <T> executeWithRetry(
     is OutcomeHandlerResult.Retry -> {
         val res = CompletableFuture<T>()
         val delay = run {
-            val scheduledDelay = (initialDelay * exp.pow(i.toDouble()) * (1.0 + (randomizer?.nextDouble(-0.5, 0.5) ?: 0.0))).toLong()
+            val scheduledDelay = (initialDelay * Math.pow(exp, i.toDouble())).toLong()
             outcomeHandlerResult.suggestedDelayMillis?.coerceAtMost(scheduledDelay) ?: scheduledDelay
         }
         eventExecutorGroup.schedule({
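The two sides of the hunk above schedule retries differently: one multiplies the exponential delay `initialDelay * exp^i` by a random factor in `[0.5, 1.5)`, the other uses the bare exponential. A standalone sketch of both variants:

```kotlin
import kotlin.math.pow
import kotlin.random.Random

// Sketch of the delay schedules from the hunk above: attempt i waits
// initialDelay * exp^i milliseconds, optionally jittered by ±50% so that
// many clients retrying at once do not hit the server in lockstep.
fun retryDelayMillis(i: Int, initialDelay: Double, exp: Double, randomizer: Random? = null): Long {
    val jitter = 1.0 + (randomizer?.nextDouble(-0.5, 0.5) ?: 0.0)
    return (initialDelay * exp.pow(i.toDouble()) * jitter).toLong()
}

fun main() {
    // Without jitter and exp = 2.0: 100, 200, 400, 800 ms
    repeat(4) { println(retryDelayMillis(it, 100.0, 2.0)) }
}
```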
RetryTest.kt (rbcs-client)
@@ -89,7 +89,7 @@ class RetryTest {
     val random = Random(testArgs.seed)

     val future =
-        executeWithRetry(executor, testArgs.maxAttempt, testArgs.initialDelay, testArgs.exp, outcomeHandler, null) {
+        executeWithRetry(executor, testArgs.maxAttempt, testArgs.initialDelay, testArgs.exp, outcomeHandler) {
             val now = System.nanoTime()
             val result = CompletableFuture<Int>()
             executor.submit {
@@ -129,7 +129,7 @@ class RetryTest {
     previousAttempt.first + testArgs.initialDelay * Math.pow(testArgs.exp, index.toDouble()) * 1e6
     val actualTimestamp = timestamp
     val err = Math.abs(expectedTimestamp - actualTimestamp) / expectedTimestamp
-    Assertions.assertTrue(err < 1e-2)
+    Assertions.assertTrue(err < 1e-3)
 }
 if (index == attempts.size - 1 && index < testArgs.maxAttempt - 1) {
     /*
extractChunk helper (rbcs-common, deleted file)
@@ -1,15 +0,0 @@
package net.woggioni.rbcs.common

import io.netty.buffer.ByteBuf
import io.netty.buffer.ByteBufAllocator
import io.netty.buffer.CompositeByteBuf

fun extractChunk(buf: CompositeByteBuf, alloc: ByteBufAllocator): ByteBuf {
    val chunk = alloc.compositeBuffer()
    for (component in buf.decompose(0, buf.readableBytes())) {
        chunk.addComponent(true, component.retain())
    }
    buf.removeComponents(0, buf.numComponents())
    buf.clear()
    return chunk
}
RBCS.kt (rbcs-common)
@@ -12,24 +12,6 @@ object RBCS {
     const val RBCS_PREFIX: String = "rbcs"
     const val XML_SCHEMA_NAMESPACE_URI = "http://www.w3.org/2001/XMLSchema-instance"

-    fun ByteArray.toInt(index: Int = 0): Long {
-        if (index + 4 > size) throw IllegalArgumentException("Not enough bytes to decode a 32 bits integer")
-        var value: Long = 0
-        for (b in index until index + 4) {
-            value = (value shl 8) + (get(b).toInt() and 0xFF)
-        }
-        return value
-    }
-
-    fun ByteArray.toLong(index: Int = 0): Long {
-        if (index + 8 > size) throw IllegalArgumentException("Not enough bytes to decode a 64 bits long integer")
-        var value: Long = 0
-        for (b in index until index + 8) {
-            value = (value shl 8) + (get(b).toInt() and 0xFF)
-        }
-        return value
-    }
-
     fun digest(
         data: ByteArray,
         md: MessageDigest = MessageDigest.getInstance("MD5")
build.gradle (rbcs-server-memcache)
@@ -34,7 +34,6 @@ dependencies {
     implementation catalog.jwo
     implementation catalog.slf4j.api
     implementation catalog.netty.common
-    implementation catalog.netty.handler
     implementation catalog.netty.codec.memcache

     bundle catalog.netty.codec.memcache
module-info.java (rbcs-server-memcache)
@@ -11,7 +11,6 @@ module net.woggioni.rbcs.server.memcache {
     requires io.netty.codec.memcache;
     requires io.netty.common;
     requires io.netty.buffer;
-    requires io.netty.handler;
     requires org.slf4j;

     provides CacheProvider with net.woggioni.rbcs.server.memcache.MemcacheCacheProvider;
MemcacheCache.kt (rbcs-server-memcache)
@@ -1,232 +1,20 @@
 package net.woggioni.rbcs.server.memcache

-import io.netty.buffer.ByteBufAllocator
-import io.netty.buffer.Unpooled
-import io.netty.handler.codec.memcache.binary.BinaryMemcacheOpcodes
-import io.netty.handler.codec.memcache.binary.BinaryMemcacheResponseStatus
-import io.netty.handler.codec.memcache.binary.DefaultBinaryMemcacheRequest
+import io.netty.buffer.ByteBuf
 import net.woggioni.rbcs.api.Cache
-import net.woggioni.rbcs.api.RequestHandle
-import net.woggioni.rbcs.api.ResponseHandle
-import net.woggioni.rbcs.api.event.RequestStreamingEvent
-import net.woggioni.rbcs.api.event.ResponseStreamingEvent
-import net.woggioni.rbcs.api.exception.ContentTooLargeException
-import net.woggioni.rbcs.common.ByteBufOutputStream
-import net.woggioni.rbcs.common.RBCS.digest
-import net.woggioni.rbcs.common.contextLogger
-import net.woggioni.rbcs.common.debug
-import net.woggioni.rbcs.common.extractChunk
 import net.woggioni.rbcs.server.memcache.client.MemcacheClient
-import net.woggioni.rbcs.server.memcache.client.MemcacheResponseHandle
-import net.woggioni.rbcs.server.memcache.client.StreamingRequestEvent
-import net.woggioni.rbcs.server.memcache.client.StreamingResponseEvent
-import java.security.MessageDigest
-import java.time.Duration
-import java.time.Instant
+import java.nio.channels.ReadableByteChannel
 import java.util.concurrent.CompletableFuture
-import java.util.zip.Deflater
-import java.util.zip.DeflaterOutputStream
-import java.util.zip.Inflater
-import java.util.zip.InflaterOutputStream

-class MemcacheCache(private val cfg: MemcacheCacheConfiguration) : Cache {
-
-    companion object {
-        @JvmStatic
-        private val log = contextLogger()
-    }
-
+class MemcacheCache(private val cfg : MemcacheCacheConfiguration) : Cache {
     private val memcacheClient = MemcacheClient(cfg)

-    override fun get(key: String, responseHandle: ResponseHandle, alloc: ByteBufAllocator) {
-        val compressionMode = cfg.compressionMode
-        val buf = alloc.compositeBuffer()
-        val stream = ByteBufOutputStream(buf).let { outputStream ->
-            if (compressionMode != null) {
-                when (compressionMode) {
-                    MemcacheCacheConfiguration.CompressionMode.DEFLATE -> {
-                        InflaterOutputStream(
-                            outputStream,
-                            Inflater()
-                        )
-                    }
-                }
-            } else {
-                outputStream
-            }
-        }
-        val memcacheResponseHandle = object : MemcacheResponseHandle {
-            override fun handleEvent(evt: StreamingResponseEvent) {
-                when (evt) {
-                    is StreamingResponseEvent.ResponseReceived -> {
-                        if (evt.response.status() == BinaryMemcacheResponseStatus.SUCCESS) {
-                            responseHandle.handleEvent(ResponseStreamingEvent.RESPONSE_RECEIVED)
-                        } else if (evt.response.status() == BinaryMemcacheResponseStatus.KEY_ENOENT) {
-                            responseHandle.handleEvent(ResponseStreamingEvent.NOT_FOUND)
-                        } else {
-                            responseHandle.handleEvent(ResponseStreamingEvent.ExceptionCaught(MemcacheException(evt.response.status())))
-                        }
-                    }
-                    is StreamingResponseEvent.LastContentReceived -> {
-                        evt.content.content().let { content ->
-                            content.readBytes(stream, content.readableBytes())
-                        }
-                        buf.retain()
-                        stream.close()
-                        val chunk = extractChunk(buf, alloc)
-                        buf.release()
-                        responseHandle.handleEvent(
-                            ResponseStreamingEvent.LastChunkReceived(
-                                chunk
-                            )
-                        )
-                    }
-                    is StreamingResponseEvent.ContentReceived -> {
-                        evt.content.content().let { content ->
-                            content.readBytes(stream, content.readableBytes())
-                        }
-                        if (buf.readableBytes() >= cfg.chunkSize) {
-                            val chunk = extractChunk(buf, alloc)
-                            responseHandle.handleEvent(ResponseStreamingEvent.ChunkReceived(chunk))
-                        }
-                    }
-                    is StreamingResponseEvent.ExceptionCaught -> {
-                        responseHandle.handleEvent(ResponseStreamingEvent.ExceptionCaught(evt.exception))
-                    }
-                }
-            }
-        }
-        memcacheClient.sendRequest(Unpooled.wrappedBuffer(key.toByteArray()), memcacheResponseHandle)
-            .thenApply { memcacheRequestHandle ->
-                val request = (cfg.digestAlgorithm
-                    ?.let(MessageDigest::getInstance)
-                    ?.let { md ->
-                        digest(key.toByteArray(), md)
-                    } ?: key.toByteArray(Charsets.UTF_8)
-                ).let { digest ->
-                    DefaultBinaryMemcacheRequest(Unpooled.wrappedBuffer(digest)).apply {
-                        setOpcode(BinaryMemcacheOpcodes.GET)
-                    }
-                }
-                memcacheRequestHandle.handleEvent(StreamingRequestEvent.SendRequest(request))
-            }.exceptionally { ex ->
-                responseHandle.handleEvent(ResponseStreamingEvent.ExceptionCaught(ex))
-            }
-    }
+    override fun get(key: String): CompletableFuture<ReadableByteChannel?> {
+        return memcacheClient.get(key)
+    }

-    private fun encodeExpiry(expiry: Duration): Int {
-        val expirySeconds = expiry.toSeconds()
-        return expirySeconds.toInt().takeIf { it.toLong() == expirySeconds }
-            ?: Instant.ofEpochSecond(expirySeconds).epochSecond.toInt()
-    }
-
-    override fun put(
-        key: String,
-        responseHandle: ResponseHandle,
-        alloc: ByteBufAllocator
-    ): CompletableFuture<RequestHandle> {
-        val memcacheResponseHandle = object : MemcacheResponseHandle {
-            override fun handleEvent(evt: StreamingResponseEvent) {
-                when (evt) {
-                    is StreamingResponseEvent.ResponseReceived -> {
-                        when (evt.response.status()) {
-                            BinaryMemcacheResponseStatus.SUCCESS -> {
-                                responseHandle.handleEvent(ResponseStreamingEvent.RESPONSE_RECEIVED)
-                            }
-                            BinaryMemcacheResponseStatus.KEY_ENOENT -> {
-                                responseHandle.handleEvent(ResponseStreamingEvent.NOT_FOUND)
-                            }
-                            BinaryMemcacheResponseStatus.E2BIG -> {
-                                responseHandle.handleEvent(
-                                    ResponseStreamingEvent.ExceptionCaught(
-                                        ContentTooLargeException("Request payload is too big", null)
-                                    )
-                                )
-                            }
-                            else -> {
-                                responseHandle.handleEvent(ResponseStreamingEvent.ExceptionCaught(MemcacheException(evt.response.status())))
-                            }
-                        }
-                    }
-                    is StreamingResponseEvent.LastContentReceived -> {
-                        responseHandle.handleEvent(
-                            ResponseStreamingEvent.LastChunkReceived(
-                                evt.content.content().retain()
-                            )
-                        )
-                    }
-                    is StreamingResponseEvent.ContentReceived -> {
-                        responseHandle.handleEvent(ResponseStreamingEvent.ChunkReceived(evt.content.content().retain()))
-                    }
-                    is StreamingResponseEvent.ExceptionCaught -> {
-                        responseHandle.handleEvent(ResponseStreamingEvent.ExceptionCaught(evt.exception))
-                    }
-                }
-            }
-        }
-        val result: CompletableFuture<RequestHandle> =
-            memcacheClient.sendRequest(Unpooled.wrappedBuffer(key.toByteArray()), memcacheResponseHandle)
-                .thenApply { memcacheRequestHandle ->
-                    val request = (cfg.digestAlgorithm
-                        ?.let(MessageDigest::getInstance)
-                        ?.let { md ->
-                            digest(key.toByteArray(), md)
-                        } ?: key.toByteArray(Charsets.UTF_8)).let { digest ->
-                        val extras = Unpooled.buffer(8, 8)
-                        extras.writeInt(0)
-                        extras.writeInt(encodeExpiry(cfg.maxAge))
-                        DefaultBinaryMemcacheRequest(Unpooled.wrappedBuffer(digest), extras).apply {
-                            setOpcode(BinaryMemcacheOpcodes.SET)
-                        }
-                    }
-                    // memcacheRequestHandle.handleEvent(StreamingRequestEvent.SendRequest(request))
-                    val compressionMode = cfg.compressionMode
-                    val buf = alloc.heapBuffer()
-                    val stream = ByteBufOutputStream(buf).let { outputStream ->
-                        if (compressionMode != null) {
-                            when (compressionMode) {
-                                MemcacheCacheConfiguration.CompressionMode.DEFLATE -> {
-                                    DeflaterOutputStream(
-                                        outputStream,
-                                        Deflater(Deflater.DEFAULT_COMPRESSION, false)
-                                    )
-                                }
-                            }
-                        } else {
-                            outputStream
-                        }
-                    }
-                    RequestHandle { evt ->
-                        when (evt) {
-                            is RequestStreamingEvent.LastChunkReceived -> {
-                                evt.chunk.readBytes(stream, evt.chunk.readableBytes())
-                                buf.retain()
-                                stream.close()
-                                request.setTotalBodyLength(buf.readableBytes() + request.keyLength() + request.extrasLength())
-                                memcacheRequestHandle.handleEvent(StreamingRequestEvent.SendRequest(request))
-                                memcacheRequestHandle.handleEvent(StreamingRequestEvent.SendLastChunk(buf))
-                            }
-                            is RequestStreamingEvent.ChunkReceived -> {
-                                evt.chunk.readBytes(stream, evt.chunk.readableBytes())
-                            }
-                            is RequestStreamingEvent.ExceptionCaught -> {
-                                stream.close()
-                            }
-                        }
-                    }
-                }
-        return result
-    }
+    override fun put(key: String, content: ByteBuf): CompletableFuture<Void> {
+        return memcacheClient.put(key, content, cfg.maxAge)
+    }

     override fun close() {
MemcacheCacheConfiguration.kt (rbcs-server-memcache)
@@ -10,10 +10,14 @@ data class MemcacheCacheConfiguration(
     val maxSize: Int = 0x100000,
     val digestAlgorithm: String? = null,
     val compressionMode: CompressionMode? = null,
-    val chunkSize : Int
 ) : Configuration.Cache {

     enum class CompressionMode {
+        /**
+         * Gzip mode
+         */
+        GZIP,
+
         /**
          * Deflate mode
          */
MemcacheCacheProvider.kt (rbcs-server-memcache)
@@ -29,14 +29,12 @@ class MemcacheCacheProvider : CacheProvider<MemcacheCacheConfiguration> {
     ?.let(Duration::parse)
     ?: Duration.ofDays(1)
 val maxSize = el.renderAttribute("max-size")
-    ?.let(Integer::decode)
+    ?.let(String::toInt)
     ?: 0x100000
-val chunkSize = el.renderAttribute("chunk-size")
-    ?.let(Integer::decode)
-    ?: 0x4000
 val compressionMode = el.renderAttribute("compression-mode")
     ?.let {
         when (it) {
+            "gzip" -> MemcacheCacheConfiguration.CompressionMode.GZIP
             "deflate" -> MemcacheCacheConfiguration.CompressionMode.DEFLATE
             else -> MemcacheCacheConfiguration.CompressionMode.DEFLATE
         }
@@ -65,7 +63,6 @@ class MemcacheCacheProvider : CacheProvider<MemcacheCacheConfiguration> {
     maxSize,
     digestAlgorithm,
     compressionMode,
-    chunkSize
 )
 }

@@ -73,6 +70,7 @@ class MemcacheCacheProvider : CacheProvider<MemcacheCacheConfiguration> {
 val result = doc.createElement("cache")
 Xml.of(doc, result) {
     attr("xmlns:${xmlNamespacePrefix}", xmlNamespace, namespaceURI = "http://www.w3.org/2000/xmlns/")
+
     attr("xs:type", "${xmlNamespacePrefix}:$xmlType", RBCS.XML_SCHEMA_NAMESPACE_URI)
     for (server in servers) {
         node("server") {
@@ -86,13 +84,13 @@ class MemcacheCacheProvider : CacheProvider<MemcacheCacheConfiguration> {
     }
     attr("max-age", maxAge.toString())
     attr("max-size", maxSize.toString())
-    attr("chunk-size", chunkSize.toString())
     digestAlgorithm?.let { digestAlgorithm ->
         attr("digest", digestAlgorithm)
     }
     compressionMode?.let { compressionMode ->
         attr(
             "compression-mode", when (compressionMode) {
+                MemcacheCacheConfiguration.CompressionMode.GZIP -> "gzip"
                 MemcacheCacheConfiguration.CompressionMode.DEFLATE -> "deflate"
             }
         )
Streaming request/response handle types (rbcs-server-memcache client, deleted file)
@@ -1,30 +0,0 @@
package net.woggioni.rbcs.server.memcache.client

import io.netty.buffer.ByteBuf
import io.netty.handler.codec.memcache.LastMemcacheContent
import io.netty.handler.codec.memcache.MemcacheContent
import io.netty.handler.codec.memcache.binary.BinaryMemcacheRequest
import io.netty.handler.codec.memcache.binary.BinaryMemcacheResponse

sealed interface StreamingRequestEvent {
    class SendRequest(val request : BinaryMemcacheRequest) : StreamingRequestEvent
    open class SendChunk(val chunk : ByteBuf) : StreamingRequestEvent
    class SendLastChunk(chunk : ByteBuf) : SendChunk(chunk)
    class ExceptionCaught(val exception : Throwable) : StreamingRequestEvent
}

sealed interface StreamingResponseEvent {
    class ResponseReceived(val response : BinaryMemcacheResponse) : StreamingResponseEvent
    open class ContentReceived(val content : MemcacheContent) : StreamingResponseEvent
    class LastContentReceived(val lastContent : LastMemcacheContent) : ContentReceived(lastContent)
    class ExceptionCaught(val exception : Throwable) : StreamingResponseEvent
}

interface MemcacheRequestHandle {
    fun handleEvent(evt : StreamingRequestEvent)
}

interface MemcacheResponseHandle {
    fun handleEvent(evt : StreamingResponseEvent)
}
MemcacheClient.kt (rbcs-server-memcache)
@@ -3,6 +3,7 @@ package net.woggioni.rbcs.server.memcache.client

 import io.netty.bootstrap.Bootstrap
 import io.netty.buffer.ByteBuf
+import io.netty.buffer.Unpooled
 import io.netty.channel.Channel
 import io.netty.channel.ChannelHandlerContext
 import io.netty.channel.ChannelOption
@@ -13,23 +14,36 @@ import io.netty.channel.pool.AbstractChannelPoolHandler
 import io.netty.channel.pool.ChannelPool
 import io.netty.channel.pool.FixedChannelPool
 import io.netty.channel.socket.nio.NioSocketChannel
-import io.netty.handler.codec.memcache.DefaultLastMemcacheContent
-import io.netty.handler.codec.memcache.DefaultMemcacheContent
-import io.netty.handler.codec.memcache.LastMemcacheContent
-import io.netty.handler.codec.memcache.MemcacheContent
-import io.netty.handler.codec.memcache.MemcacheObject
+import io.netty.handler.codec.DecoderException
 import io.netty.handler.codec.memcache.binary.BinaryMemcacheClientCodec
-import io.netty.handler.codec.memcache.binary.BinaryMemcacheResponse
-import io.netty.handler.logging.LoggingHandler
+import io.netty.handler.codec.memcache.binary.BinaryMemcacheObjectAggregator
+import io.netty.handler.codec.memcache.binary.BinaryMemcacheOpcodes
+import io.netty.handler.codec.memcache.binary.BinaryMemcacheResponseStatus
+import io.netty.handler.codec.memcache.binary.DefaultFullBinaryMemcacheRequest
+import io.netty.handler.codec.memcache.binary.FullBinaryMemcacheRequest
+import io.netty.handler.codec.memcache.binary.FullBinaryMemcacheResponse
 import io.netty.util.concurrent.GenericFutureListener
+import net.woggioni.rbcs.common.ByteBufInputStream
+import net.woggioni.rbcs.common.ByteBufOutputStream
+import net.woggioni.rbcs.common.RBCS.digest
 import net.woggioni.rbcs.common.HostAndPort
 import net.woggioni.rbcs.common.contextLogger
-import net.woggioni.rbcs.common.debug
 import net.woggioni.rbcs.server.memcache.MemcacheCacheConfiguration
+import net.woggioni.rbcs.server.memcache.MemcacheException
+import net.woggioni.jwo.JWO
 import java.net.InetSocketAddress
+import java.nio.channels.Channels
+import java.nio.channels.ReadableByteChannel
+import java.security.MessageDigest
+import java.time.Duration
+import java.time.Instant
 import java.util.concurrent.CompletableFuture
 import java.util.concurrent.ConcurrentHashMap
-import java.util.concurrent.atomic.AtomicLong
+import java.util.zip.Deflater
+import java.util.zip.DeflaterOutputStream
+import java.util.zip.GZIPInputStream
+import java.util.zip.GZIPOutputStream
+import java.util.zip.InflaterInputStream
 import io.netty.util.concurrent.Future as NettyFuture


@@ -47,8 +61,6 @@ class MemcacheClient(private val cfg: MemcacheCacheConfiguration) : AutoCloseable {
         group = NioEventLoopGroup()
     }

-    private val counter = AtomicLong(0)
-
     private fun newConnectionPool(server: MemcacheCacheConfiguration.Server): FixedChannelPool {
         val bootstrap = Bootstrap().apply {
             group(group)
@@ -64,15 +76,18 @@ class MemcacheClient(private val cfg: MemcacheCacheConfiguration) : AutoCloseable {
             override fun channelCreated(ch: Channel) {
                 val pipeline: ChannelPipeline = ch.pipeline()
                 pipeline.addLast(BinaryMemcacheClientCodec())
-                pipeline.addLast(LoggingHandler())
+                pipeline.addLast(BinaryMemcacheObjectAggregator(cfg.maxSize))
             }
         }
         return FixedChannelPool(bootstrap, channelPoolHandler, server.maxConnections)
     }

-    fun sendRequest(key: ByteBuf, responseHandle: MemcacheResponseHandle): CompletableFuture<MemcacheRequestHandle> {
+    private fun sendRequest(request: FullBinaryMemcacheRequest): CompletableFuture<FullBinaryMemcacheResponse> {
         val server = cfg.servers.let { servers ->
             if (servers.size > 1) {
+                val key = request.key().duplicate()
                 var checksum = 0
                 while (key.readableBytes() > 4) {
                     val byte = key.readInt()
@@ -88,7 +103,7 @@ class MemcacheClient(private val cfg: MemcacheCacheConfiguration) : AutoCloseable {
             }
         }

-        val response = CompletableFuture<MemcacheRequestHandle>()
+        val response = CompletableFuture<FullBinaryMemcacheResponse>()
         // Custom handler for processing responses
         val pool = connectionPool.computeIfAbsent(server.endpoint) {
             newConnectionPool(server)
@@ -98,73 +113,31 @@ class MemcacheClient(private val cfg: MemcacheCacheConfiguration) : AutoCloseable {
             if (channelFuture.isSuccess) {
                 val channel = channelFuture.now
                 val pipeline = channel.pipeline()
-                val handler = object : SimpleChannelInboundHandler<MemcacheObject>() {
+                channel.pipeline()
+                    .addLast("client-handler", object : SimpleChannelInboundHandler<FullBinaryMemcacheResponse>() {
                     override fun channelRead0(
                         ctx: ChannelHandlerContext,
-                        msg: MemcacheObject
+                        msg: FullBinaryMemcacheResponse
                     ) {
-                        when (msg) {
-                            is BinaryMemcacheResponse -> responseHandle.handleEvent(
-                                StreamingResponseEvent.ResponseReceived(
-                                    msg
-                                )
-                            )
-                            is LastMemcacheContent -> {
-                                responseHandle.handleEvent(
-                                    StreamingResponseEvent.LastContentReceived(
-                                        msg
-                                    )
-                                )
                         pipeline.removeLast()
                         pool.release(channel)
-                            }
-                            is MemcacheContent -> responseHandle.handleEvent(
-                                StreamingResponseEvent.ContentReceived(
-                                    msg
-                                )
-                            )
-                        }
+                        msg.touch("The method's caller must remember to release this")
+                        response.complete(msg.retain())
                     }

                     override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
-                        responseHandle.handleEvent(StreamingResponseEvent.ExceptionCaught(cause))
+                        val ex = when (cause) {
+                            is DecoderException -> cause.cause!!
+                            else -> cause
+                        }
                         ctx.close()
                         pipeline.removeLast()
                         pool.release(channel)
+                        response.completeExceptionally(ex)
                     }
-                }
-                channel.pipeline()
-                    .addLast("client-handler", handler)
-                response.complete(object : MemcacheRequestHandle {
-                    override fun handleEvent(evt: StreamingRequestEvent) {
-                        when (evt) {
-                            is StreamingRequestEvent.SendRequest -> {
-                                channel.writeAndFlush(evt.request)
-                            }
-                            is StreamingRequestEvent.SendLastChunk -> {
-                                channel.writeAndFlush(DefaultLastMemcacheContent(evt.chunk))
-                                val value = counter.incrementAndGet()
-                                log.debug {
-                                    "Finished request counter: $value"
-                                }
-                            }
-                            is StreamingRequestEvent.SendChunk -> {
-                                channel.writeAndFlush(DefaultMemcacheContent(evt.chunk))
-                            }
-                            is StreamingRequestEvent.ExceptionCaught -> {
-                                responseHandle.handleEvent(StreamingResponseEvent.ExceptionCaught(evt.exception))
-                                channel.close()
-                                pipeline.removeLast()
-                                pool.release(channel)
-                            }
-                        }
-                    }
                 })
+                request.touch()
+                channel.writeAndFlush(request)
             } else {
                 response.completeExceptionally(channelFuture.cause())
             }
@@ -173,6 +146,107 @@ class MemcacheClient(private val cfg: MemcacheCacheConfiguration) : AutoCloseable {
         return response
     }

+    private fun encodeExpiry(expiry: Duration): Int {
+        val expirySeconds = expiry.toSeconds()
+        return expirySeconds.toInt().takeIf { it.toLong() == expirySeconds }
+            ?: Instant.ofEpochSecond(expirySeconds).epochSecond.toInt()
+    }
+
+    fun get(key: String): CompletableFuture<ReadableByteChannel?> {
+        val request = (cfg.digestAlgorithm
+            ?.let(MessageDigest::getInstance)
+            ?.let { md ->
+                digest(key.toByteArray(), md)
+            } ?: key.toByteArray(Charsets.UTF_8)).let { digest ->
+            DefaultFullBinaryMemcacheRequest(Unpooled.wrappedBuffer(digest), null).apply {
+                setOpcode(BinaryMemcacheOpcodes.GET)
+            }
+        }
+        return sendRequest(request).thenApply { response ->
+            try {
+                when (val status = response.status()) {
+                    BinaryMemcacheResponseStatus.SUCCESS -> {
+                        val compressionMode = cfg.compressionMode
+                        val content = response.content().retain()
+                        content.touch()
+                        if (compressionMode != null) {
+                            when (compressionMode) {
+                                MemcacheCacheConfiguration.CompressionMode.GZIP -> {
+                                    GZIPInputStream(ByteBufInputStream(content))
+                                }
+                                MemcacheCacheConfiguration.CompressionMode.DEFLATE -> {
+                                    InflaterInputStream(ByteBufInputStream(content))
+                                }
+                            }
+                        } else {
+                            ByteBufInputStream(content)
+                        }.let(Channels::newChannel)
+                    }
+                    BinaryMemcacheResponseStatus.KEY_ENOENT -> {
+                        null
+                    }
+                    else -> throw MemcacheException(status)
+                }
+            } finally {
+                response.release()
+            }
+        }
+    }
+
+    fun put(key: String, content: ByteBuf, expiry: Duration, cas: Long? = null): CompletableFuture<Void> {
+        val request = (cfg.digestAlgorithm
+            ?.let(MessageDigest::getInstance)
+            ?.let { md ->
+                digest(key.toByteArray(), md)
+            } ?: key.toByteArray(Charsets.UTF_8)).let { digest ->
+            val extras = Unpooled.buffer(8, 8)
+            extras.writeInt(0)
+            extras.writeInt(encodeExpiry(expiry))
+            val compressionMode = cfg.compressionMode
+            content.retain()
+            val payload = if (compressionMode != null) {
+                val inputStream = ByteBufInputStream(content)
+                val buf = content.alloc().buffer()
+                buf.retain()
+                val outputStream = when (compressionMode) {
+                    MemcacheCacheConfiguration.CompressionMode.GZIP -> {
+                        GZIPOutputStream(ByteBufOutputStream(buf))
+                    }
+                    MemcacheCacheConfiguration.CompressionMode.DEFLATE -> {
+                        DeflaterOutputStream(ByteBufOutputStream(buf), Deflater(Deflater.DEFAULT_COMPRESSION, false))
+                    }
+                }
+                inputStream.use { i ->
+                    outputStream.use { o ->
+                        JWO.copy(i, o)
+                    }
+                }
+                buf
+            } else {
+                content
+            }
+            DefaultFullBinaryMemcacheRequest(Unpooled.wrappedBuffer(digest), extras, payload).apply {
+                setOpcode(BinaryMemcacheOpcodes.SET)
+                cas?.let(this::setCas)
+            }
+        }
+        return sendRequest(request).thenApply { response ->
+            try {
+                when (val status = response.status()) {
+                    BinaryMemcacheResponseStatus.SUCCESS -> null
+                    else -> throw MemcacheException(status)
+                }
+            } finally {
+                response.release()
+            }
+        }
+    }

     fun shutDown(): NettyFuture<*> {
         return group.shutdownGracefully()
     }
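A note on `encodeExpiry` above: the memcached protocol interprets an expiry value of up to 30 days (2,592,000 seconds) as a relative number of seconds, and anything larger as an absolute Unix timestamp. A standalone sketch of an encoding that handles both cases explicitly (an assumption about intent, not the project's code):

```kotlin
import java.time.Duration
import java.time.Instant

// Sketch: durations longer than memcached's 30-day relative limit must be
// sent as an absolute Unix timestamp (now + duration) instead.
private const val MEMCACHED_RELATIVE_LIMIT_SECONDS = 60L * 60 * 24 * 30

fun expiryForMemcached(expiry: Duration): Int {
    val seconds = expiry.toSeconds()
    return if (seconds <= MEMCACHED_RELATIVE_LIMIT_SECONDS) seconds.toInt()
    else (Instant.now().epochSecond + seconds).toInt()
}
```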
rbcs-memcache XML schema (rbcs-server-memcache)
@@ -20,8 +20,7 @@
     <xs:element name="server" type="rbcs-memcache:memcacheServerType"/>
 </xs:sequence>
 <xs:attribute name="max-age" type="xs:duration" default="P1D"/>
-<xs:attribute name="max-size" type="rbcs:byteSize" default="1048576"/>
-<xs:attribute name="chunk-size" type="rbcs:byteSize" default="0x4000"/>
+<xs:attribute name="max-size" type="xs:unsignedInt" default="1048576"/>
 <xs:attribute name="digest" type="xs:token" />
 <xs:attribute name="compression-mode" type="rbcs-memcache:compressionType"/>
 </xs:extension>
@@ -31,6 +30,7 @@
 <xs:simpleType name="compressionType">
     <xs:restriction base="xs:token">
         <xs:enumeration value="deflate"/>
+        <xs:enumeration value="gzip"/>
     </xs:restriction>
 </xs:simpleType>
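To put the schema change in context, a cache definition accepted by this XSD might look like the sketch below; the `server` element's attribute names are assumptions, since `memcacheServerType` is not part of this diff:

```xml
<!-- Sketch inferred from the XSD hunk above; host/port attribute names are assumptions -->
<cache xs:type="rbcs-memcache:memcacheCacheType"
       max-age="P1D"
       max-size="1048576"
       digest="MD5"
       compression-mode="gzip">
    <server host="127.0.0.1" port="11211"/>
</cache>
```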
build.gradle (rbcs-server)
@@ -9,9 +9,6 @@ dependencies {
     implementation catalog.jwo
     implementation catalog.slf4j.api
     implementation catalog.netty.codec.http
-    implementation catalog.netty.handler
-    implementation catalog.netty.buffer
-    implementation catalog.netty.transport

     api project(':rbcs-common')
     api project(':rbcs-api')
@@ -39,4 +36,3 @@ publishing {
 }

-
```diff
@@ -16,6 +16,7 @@ import io.netty.handler.codec.compression.CompressionOptions
 import io.netty.handler.codec.http.DefaultHttpContent
 import io.netty.handler.codec.http.HttpContentCompressor
 import io.netty.handler.codec.http.HttpHeaderNames
+import io.netty.handler.codec.http.HttpObjectAggregator
 import io.netty.handler.codec.http.HttpRequest
 import io.netty.handler.codec.http.HttpServerCodec
 import io.netty.handler.ssl.ClientAuth
@@ -29,13 +30,11 @@ import io.netty.handler.timeout.IdleStateHandler
 import io.netty.util.AttributeKey
 import io.netty.util.concurrent.DefaultEventExecutorGroup
 import io.netty.util.concurrent.EventExecutorGroup
-import net.woggioni.jwo.JWO
-import net.woggioni.jwo.Tuple2
 import net.woggioni.rbcs.api.Configuration
 import net.woggioni.rbcs.api.exception.ConfigurationException
+import net.woggioni.rbcs.common.RBCS.toUrl
 import net.woggioni.rbcs.common.PasswordSecurity.decodePasswordHash
 import net.woggioni.rbcs.common.PasswordSecurity.hashPassword
-import net.woggioni.rbcs.common.RBCS.toUrl
 import net.woggioni.rbcs.common.Xml
 import net.woggioni.rbcs.common.contextLogger
 import net.woggioni.rbcs.common.debug
@@ -49,6 +48,8 @@ import net.woggioni.rbcs.server.configuration.Serializer
 import net.woggioni.rbcs.server.exception.ExceptionHandler
 import net.woggioni.rbcs.server.handler.ServerHandler
 import net.woggioni.rbcs.server.throttling.ThrottlingHandler
+import net.woggioni.jwo.JWO
+import net.woggioni.jwo.Tuple2
 import java.io.OutputStream
 import java.net.InetSocketAddress
 import java.nio.file.Files
@@ -58,7 +59,6 @@ import java.security.PrivateKey
 import java.security.cert.X509Certificate
 import java.util.Arrays
 import java.util.Base64
-import java.util.concurrent.Future
 import java.util.concurrent.TimeUnit
 import java.util.regex.Matcher
 import java.util.regex.Pattern
```
```diff
@@ -128,12 +128,11 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
                     val clientCertificate = peerCertificates.first() as X509Certificate
                     val user = userExtractor?.extract(clientCertificate)
                     val group = groupExtractor?.extract(clientCertificate)
-                    val allGroups =
-                        ((user?.groups ?: emptySet()).asSequence() + sequenceOf(group).filterNotNull()).toSet()
+                    val allGroups = ((user?.groups ?: emptySet()).asSequence() + sequenceOf(group).filterNotNull()).toSet()
                     AuthenticationResult(user, allGroups)
-                } ?: anonymousUserGroups?.let { AuthenticationResult(null, it) }
+                } ?: anonymousUserGroups?.let{ AuthenticationResult(null, it) }
             } catch (es: SSLPeerUnverifiedException) {
-                anonymousUserGroups?.let { AuthenticationResult(null, it) }
+                anonymousUserGroups?.let{ AuthenticationResult(null, it) }
             }
         }
     }
```
```diff
@@ -192,7 +191,7 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
     private class ServerInitializer(
         private val cfg: Configuration,
         private val eventExecutorGroup: EventExecutorGroup
-    ) : ChannelInitializer<Channel>(), AutoCloseable {
+    ) : ChannelInitializer<Channel>() {
 
         companion object {
             private fun createSslCtx(tls: Configuration.Tls): SslContext {
@@ -214,7 +213,7 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
                     trustManager(
                         ClientCertificateValidator.getTrustManager(ts, trustStore.isCheckCertificateStatus)
                     )
-                    if (trustStore.isRequireClientCertificate) ClientAuth.REQUIRE
+                    if(trustStore.isRequireClientCertificate) ClientAuth.REQUIRE
                     else ClientAuth.OPTIONAL
                 } ?: ClientAuth.NONE
                 clientAuth(clientAuth)
```
```diff
@@ -246,9 +245,14 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
 
         private val log = contextLogger()
 
-        private val cache = cfg.cache.materialize()
+        private val serverHandler = let {
+            val cacheImplementation = cfg.cache.materialize()
+            val prefix = Path.of("/").resolve(Path.of(cfg.serverPath ?: "/"))
+            ServerHandler(cacheImplementation, prefix)
+        }
 
         private val exceptionHandler = ExceptionHandler()
+        private val throttlingHandler = ThrottlingHandler(cfg)
 
         private val authenticator = when (val auth = cfg.authentication) {
             is Configuration.BasicAuthentication -> NettyHttpBasicAuthenticator(cfg.users, RoleAuthorizer())
```
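Constructing `ServerHandler` and `ThrottlingHandler` once and storing them as fields means a single handler instance is added to every channel's pipeline. Netty only allows that for handlers annotated `@Sharable` — and indeed, further down this diff adds the annotation to both classes. A minimal sketch of the pattern:

```kotlin
import io.netty.channel.ChannelHandler
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.ChannelInboundHandlerAdapter

// A handler instance reused across every channel's pipeline must be
// stateless and annotated @Sharable, otherwise Netty rejects the second
// registration with an exception.
@ChannelHandler.Sharable
class ForwardingHandler : ChannelInboundHandlerAdapter() {
    override fun channelRead(ctx: ChannelHandlerContext, msg: Any) {
        // no per-channel mutable state is kept here
        ctx.fireChannelRead(msg)
    }
}
```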
```diff
@@ -307,7 +311,7 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
             cfg.connection.also { conn ->
                 val readTimeout = conn.readTimeout.toMillis()
                 val writeTimeout = conn.writeTimeout.toMillis()
-                if (readTimeout > 0 || writeTimeout > 0) {
+                if(readTimeout > 0 || writeTimeout > 0) {
                     pipeline.addLast(
                         IdleStateHandler(
                             false,
@@ -321,7 +325,7 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
                 val readIdleTimeout = conn.readIdleTimeout.toMillis()
                 val writeIdleTimeout = conn.writeIdleTimeout.toMillis()
                 val idleTimeout = conn.idleTimeout.toMillis()
-                if (readIdleTimeout > 0 || writeIdleTimeout > 0 || idleTimeout > 0) {
+                if(readIdleTimeout > 0 || writeIdleTimeout > 0 || idleTimeout > 0) {
                     pipeline.addLast(
                         IdleStateHandler(
                             true,
@@ -336,19 +340,16 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
                     pipeline.addLast(object : ChannelInboundHandlerAdapter() {
                         override fun userEventTriggered(ctx: ChannelHandlerContext, evt: Any) {
                             if (evt is IdleStateEvent) {
-                                when (evt.state()) {
+                                when(evt.state()) {
                                     IdleState.READER_IDLE -> log.debug {
                                         "Read timeout reached on channel ${ch.id().asShortText()}, closing the connection"
                                     }
-
                                     IdleState.WRITER_IDLE -> log.debug {
                                         "Write timeout reached on channel ${ch.id().asShortText()}, closing the connection"
                                     }
-
                                     IdleState.ALL_IDLE -> log.debug {
                                         "Idle timeout reached on channel ${ch.id().asShortText()}, closing the connection"
                                     }
-
                                     null -> throw IllegalStateException("This should never happen")
                                 }
                                 ctx.close()
```
```diff
@@ -361,53 +362,34 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
             pipeline.addLast(HttpServerCodec())
             pipeline.addLast(HttpChunkContentCompressor(1024))
             pipeline.addLast(ChunkedWriteHandler())
+            pipeline.addLast(HttpObjectAggregator(cfg.connection.maxRequestSize))
             authenticator?.let {
                 pipeline.addLast(it)
             }
-            pipeline.addLast(ThrottlingHandler(cfg))
-
-            val serverHandler = let {
-                val prefix = Path.of("/").resolve(Path.of(cfg.serverPath ?: "/"))
-                ServerHandler(cache, prefix)
-            }
+            pipeline.addLast(throttlingHandler)
 
             pipeline.addLast(eventExecutorGroup, serverHandler)
             pipeline.addLast(exceptionHandler)
         }
-
-        override fun close() {
-            cache.close()
-        }
     }
 
     class ServerHandle(
         httpChannelFuture: ChannelFuture,
-        private val executorGroups: Iterable<EventExecutorGroup>,
-        private val serverInitializer: AutoCloseable
+        private val executorGroups: Iterable<EventExecutorGroup>
     ) : AutoCloseable {
         private val httpChannel: Channel = httpChannelFuture.channel()
         private val closeFuture: ChannelFuture = httpChannel.closeFuture()
         private val log = contextLogger()
 
-        fun shutdown(): Future<Void> {
+        fun shutdown(): ChannelFuture {
             return httpChannel.close()
         }
 
         override fun close() {
             try {
                 closeFuture.sync()
-            } catch (ex: Throwable) {
-                log.error(ex.message, ex)
-            }
-            try {
-                serverInitializer.close()
-            } catch (ex: Throwable) {
-                log.error(ex.message, ex)
-            }
-            executorGroups.forEach {
-                try {
-                    it.shutdownGracefully().sync()
-                } catch (ex: Throwable) {
-                    log.error(ex.message, ex)
+            } finally {
+                executorGroups.forEach {
+                    it.shutdownGracefully().sync()
                 }
             }
             log.info {
```
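The `HttpObjectAggregator` inserted above is the pivotal change in this hunk: it merges the decoder's `HttpRequest` + `HttpContent` stream into a single `FullHttpRequest`, which is what lets `ServerHandler` drop its chunk-tracking state later in this diff. A small self-contained demonstration using Netty's `EmbeddedChannel`:

```kotlin
import io.netty.buffer.Unpooled
import io.netty.channel.embedded.EmbeddedChannel
import io.netty.handler.codec.http.FullHttpRequest
import io.netty.handler.codec.http.HttpObjectAggregator
import io.netty.handler.codec.http.HttpRequestDecoder
import java.nio.charset.StandardCharsets

fun main() {
    // The decoder emits separate HttpRequest/HttpContent pieces; the
    // aggregator merges them so downstream handlers see one FullHttpRequest.
    val channel = EmbeddedChannel(HttpRequestDecoder(), HttpObjectAggregator(1 shl 20))
    val raw = "PUT /cache/key HTTP/1.1\r\nContent-Length: 5\r\n\r\nhello"
    channel.writeInbound(Unpooled.copiedBuffer(raw, StandardCharsets.US_ASCII))
    val request = channel.readInbound<FullHttpRequest>()
    println(request.content().toString(StandardCharsets.US_ASCII)) // prints "hello"
    request.release()
}
```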
```diff
@@ -429,12 +411,11 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
             }
             DefaultEventExecutorGroup(Runtime.getRuntime().availableProcessors(), threadFactory)
         }
-        val serverInitializer = ServerInitializer(cfg, eventExecutorGroup)
         val bootstrap = ServerBootstrap().apply {
             // Configure the server
             group(bossGroup, workerGroup)
             channel(serverSocketChannel)
-            childHandler(serverInitializer)
+            childHandler(ServerInitializer(cfg, eventExecutorGroup))
             option(ChannelOption.SO_BACKLOG, cfg.incomingConnectionsBacklogSize)
             childOption(ChannelOption.SO_KEEPALIVE, true)
         }
@@ -446,6 +427,6 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
         log.info {
             "RemoteBuildCacheServer is listening on ${cfg.host}:${cfg.port}"
         }
-        return ServerHandle(httpChannel, setOf(bossGroup, workerGroup, eventExecutorGroup), serverInitializer)
+        return ServerHandle(httpChannel, setOf(bossGroup, workerGroup, eventExecutorGroup))
     }
 }
```
```diff
@@ -6,7 +6,6 @@ import io.netty.channel.ChannelHandlerContext
 import io.netty.channel.ChannelInboundHandlerAdapter
 import io.netty.handler.codec.http.DefaultFullHttpResponse
 import io.netty.handler.codec.http.FullHttpResponse
-import io.netty.handler.codec.http.HttpContent
 import io.netty.handler.codec.http.HttpHeaderNames
 import io.netty.handler.codec.http.HttpRequest
 import io.netty.handler.codec.http.HttpResponseStatus
@@ -58,8 +57,6 @@ abstract class AbstractNettyHttpAuthenticator(private val authorizer: Authorizer
             } else {
                 authorizationFailure(ctx, msg)
             }
-        } else if(msg is HttpContent) {
-            ctx.fireChannelRead(msg)
         }
     }
 
```
```diff
@@ -1,16 +1,13 @@
 package net.woggioni.rbcs.server.cache
 
-import io.netty.buffer.ByteBufAllocator
-import net.woggioni.jwo.JWO
+import io.netty.buffer.ByteBuf
 import net.woggioni.rbcs.api.Cache
-import net.woggioni.rbcs.api.RequestHandle
-import net.woggioni.rbcs.api.ResponseHandle
-import net.woggioni.rbcs.api.event.RequestStreamingEvent
-import net.woggioni.rbcs.api.event.ResponseStreamingEvent
-import net.woggioni.rbcs.common.ByteBufOutputStream
+import net.woggioni.rbcs.common.ByteBufInputStream
 import net.woggioni.rbcs.common.RBCS.digestString
 import net.woggioni.rbcs.common.contextLogger
-import net.woggioni.rbcs.common.extractChunk
+import net.woggioni.jwo.JWO
+import net.woggioni.jwo.LockFile
+import java.nio.channels.Channels
 import java.nio.channels.FileChannel
 import java.nio.file.Files
 import java.nio.file.Path
@@ -21,8 +18,10 @@ import java.security.MessageDigest
 import java.time.Duration
 import java.time.Instant
 import java.util.concurrent.CompletableFuture
+import java.util.concurrent.atomic.AtomicReference
 import java.util.zip.Deflater
 import java.util.zip.DeflaterOutputStream
+import java.util.zip.Inflater
 import java.util.zip.InflaterInputStream
 
 class FileSystemCache(
@@ -30,8 +29,7 @@ class FileSystemCache(
     val maxAge: Duration,
     val digestAlgorithm: String?,
     val compressionEnabled: Boolean,
-    val compressionLevel: Int,
-    val chunkSize: Int
+    val compressionLevel: Int
 ) : Cache {
 
     private companion object {
@@ -43,13 +41,9 @@ class FileSystemCache(
             Files.createDirectories(root)
         }
 
-    @Volatile
-    private var running = true
-
-    private var nextGc = Instant.now()
-
-    override fun get(key: String, responseHandle: ResponseHandle, alloc: ByteBufAllocator) {
-        (digestAlgorithm
+    private var nextGc = AtomicReference(Instant.now().plus(maxAge))
+
+    override fun get(key: String) = (digestAlgorithm
             ?.let(MessageDigest::getInstance)
             ?.let { md ->
                 digestString(key.toByteArray(), md)
```
```diff
@@ -57,57 +51,30 @@ class FileSystemCache(
             root.resolve(digest).takeIf(Files::exists)
                 ?.let { file ->
                     file.takeIf(Files::exists)?.let { file ->
-                        responseHandle.handleEvent(ResponseStreamingEvent.RESPONSE_RECEIVED)
                         if (compressionEnabled) {
-                            val compositeBuffer = alloc.compositeBuffer()
-                            ByteBufOutputStream(compositeBuffer).use { outputStream ->
-                                InflaterInputStream(Files.newInputStream(file)).use { inputStream ->
-                                    val ioBuffer = alloc.buffer(chunkSize)
-                                    try {
-                                        while (true) {
-                                            val read = ioBuffer.writeBytes(inputStream, chunkSize)
-                                            val last = read < 0
-                                            if (read > 0) {
-                                                ioBuffer.readBytes(outputStream, read)
-                                            }
-                                            if (last) {
-                                                compositeBuffer.retain()
-                                                outputStream.close()
-                                            }
-                                            if (compositeBuffer.readableBytes() >= chunkSize || last) {
-                                                val chunk = extractChunk(compositeBuffer, alloc)
-                                                val evt = if (last) {
-                                                    ResponseStreamingEvent.LastChunkReceived(chunk)
-                                                } else {
-                                                    ResponseStreamingEvent.ChunkReceived(chunk)
-                                                }
-                                                responseHandle.handleEvent(evt)
-                                            }
-                                            if (last) break
-                                        }
-                                    } finally {
-                                        ioBuffer.release()
-                                    }
-                                }
-                            }
-                        } else {
-                            responseHandle.handleEvent(
-                                ResponseStreamingEvent.FileReceived(
-                                    FileChannel.open(file, StandardOpenOption.READ)
-                                )
-                            )
-                        }
-                    }
-                }
-        } ?: responseHandle.handleEvent(ResponseStreamingEvent.NOT_FOUND)
-    }
+                            val inflater = Inflater()
+                            Channels.newChannel(
+                                InflaterInputStream(
+                                    Channels.newInputStream(
+                                        FileChannel.open(
+                                            file,
+                                            StandardOpenOption.READ
+                                        )
+                                    ), inflater
+                                )
+                            )
+                        } else {
+                            FileChannel.open(file, StandardOpenOption.READ)
+                        }
+                    }
+                }
+        }.also {
+            gc()
+        }.let {
+            CompletableFuture.completedFuture(it)
+        }
 
-    override fun put(
-        key: String,
-        responseHandle: ResponseHandle,
-        alloc: ByteBufAllocator
-    ): CompletableFuture<RequestHandle> {
-        try {
-            (digestAlgorithm
+    override fun put(key: String, content: ByteBuf): CompletableFuture<Void> {
+        (digestAlgorithm
             ?.let(MessageDigest::getInstance)
             ?.let { md ->
```
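The rewritten `get` no longer pumps chunks through a `ResponseHandle`; it simply returns a `ReadableByteChannel` that inflates lazily as the caller reads. A standalone sketch of that read path (file name and payload are illustrative):

```kotlin
import java.io.ByteArrayOutputStream
import java.nio.ByteBuffer
import java.nio.channels.Channels
import java.nio.channels.FileChannel
import java.nio.file.Files
import java.nio.file.StandardOpenOption
import java.util.zip.Deflater
import java.util.zip.DeflaterOutputStream
import java.util.zip.Inflater
import java.util.zip.InflaterInputStream

fun main() {
    // Write a deflate-compressed file (stands in for a cache entry).
    val file = Files.createTempFile("rbcs-demo", ".bin")
    DeflaterOutputStream(Files.newOutputStream(file), Deflater()).use {
        it.write("cached artifact".toByteArray())
    }
    // The read path from the hunk above: FileChannel -> InputStream ->
    // InflaterInputStream -> ReadableByteChannel, inflating on demand.
    Channels.newChannel(
        InflaterInputStream(
            Channels.newInputStream(FileChannel.open(file, StandardOpenOption.READ)),
            Inflater()
        )
    ).use { channel ->
        val out = ByteArrayOutputStream()
        val buf = ByteBuffer.allocate(256)
        while (true) {
            val n = channel.read(buf)
            if (n < 0) break
            out.write(buf.array(), 0, n)
            buf.clear()
        }
        println(out.toString()) // prints "cached artifact"
    }
}
```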
```diff
@@ -115,87 +82,49 @@ class FileSystemCache(
             } ?: key).let { digest ->
             val file = root.resolve(digest)
             val tmpFile = Files.createTempFile(root, null, ".tmp")
-            val stream = Files.newOutputStream(tmpFile).let {
-                if (compressionEnabled) {
-                    val deflater = Deflater(compressionLevel)
-                    DeflaterOutputStream(it, deflater)
-                } else {
-                    it
-                }
-            }
-            return CompletableFuture.completedFuture(object : RequestHandle {
-                override fun handleEvent(evt: RequestStreamingEvent) {
-                    try {
-                        when (evt) {
-                            is RequestStreamingEvent.LastChunkReceived -> {
-                                evt.chunk.readBytes(stream, evt.chunk.readableBytes())
-                                stream.close()
-                                Files.move(tmpFile, file, StandardCopyOption.ATOMIC_MOVE)
-                                responseHandle.handleEvent(ResponseStreamingEvent.RESPONSE_RECEIVED)
-                            }
-
-                            is RequestStreamingEvent.ChunkReceived -> {
-                                evt.chunk.readBytes(stream, evt.chunk.readableBytes())
-                            }
-
-                            is RequestStreamingEvent.ExceptionCaught -> {
-                                Files.delete(tmpFile)
-                                stream.close()
-                            }
-                        }
-                    } catch (ex: Throwable) {
-                        responseHandle.handleEvent(ResponseStreamingEvent.ExceptionCaught(ex))
-                    }
-                }
-            })
-        } catch (ex: Throwable) {
-            responseHandle.handleEvent(ResponseStreamingEvent.ExceptionCaught(ex))
-            return CompletableFuture.failedFuture(ex)
-        }
-    }
-
-    private val garbageCollector = Thread.ofVirtual().name("file-system-cache-gc").start {
-        while (running) {
-            gc()
-        }
-    }
+            try {
+                Files.newOutputStream(tmpFile).let {
+                    if (compressionEnabled) {
+                        val deflater = Deflater(compressionLevel)
+                        DeflaterOutputStream(it, deflater)
+                    } else {
+                        it
+                    }
+                }.use {
+                    JWO.copy(ByteBufInputStream(content), it)
+                }
+                Files.move(tmpFile, file, StandardCopyOption.ATOMIC_MOVE)
+            } catch (t: Throwable) {
+                Files.delete(tmpFile)
+                throw t
+            }
+        }.also {
+            gc()
+        }
+        return CompletableFuture.completedFuture(null)
+    }
 
     private fun gc() {
         val now = Instant.now()
-        if (nextGc < now) {
-            val oldestEntry = actualGc(now)
-            nextGc = (oldestEntry ?: now).plus(maxAge)
+        val oldValue = nextGc.getAndSet(now.plus(maxAge))
+        if (oldValue < now) {
+            actualGc(now)
         }
-        Thread.sleep(minOf(Duration.between(now, nextGc), Duration.ofSeconds(1)))
     }
 
-    /**
-     * Returns the creation timestamp of the oldest cache entry (if any)
-     */
-    private fun actualGc(now: Instant): Instant? {
-        var result: Instant? = null
-        Files.list(root)
-            .filter { path ->
-                JWO.splitExtension(path)
-                    .map { it._2 }
-                    .map { it != ".tmp" }
-                    .orElse(true)
-            }
-            .filter {
-                val creationTimeStamp = Files.readAttributes(it, BasicFileAttributes::class.java)
-                    .creationTime()
-                    .toInstant()
-                if (result == null || creationTimeStamp < result) {
-                    result = creationTimeStamp
-                }
-                now > creationTimeStamp.plus(maxAge)
-            }.forEach(Files::delete)
-        return result
-    }
+    @Synchronized
+    private fun actualGc(now: Instant) {
+        Files.list(root).filter {
+            val creationTimeStamp = Files.readAttributes(it, BasicFileAttributes::class.java)
+                .creationTime()
+                .toInstant()
+            now > creationTimeStamp.plus(maxAge)
+        }.forEach { file ->
+            LockFile.acquire(file, false).use {
+                Files.delete(file)
+            }
+        }
+    }
 
-    override fun close() {
-        running = false
-        garbageCollector.join()
-    }
+    override fun close() {}
 }
```
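The dedicated GC thread is gone; instead `gc()` piggybacks on request handling and uses `AtomicReference.getAndSet` so that when many requests race, at most the one that observed an expired deadline runs the actual collection. Note that, as written, every call also pushes the deadline forward. A condensed sketch of the same gate, mirroring the diff's logic:

```kotlin
import java.time.Duration
import java.time.Instant
import java.util.concurrent.atomic.AtomicReference

class GcGate(private val period: Duration) {
    private val nextRun = AtomicReference(Instant.now().plus(period))

    fun maybeRun(action: () -> Unit) {
        val now = Instant.now()
        // getAndSet atomically claims the deadline: concurrent callers all
        // advance it, but only one of them sees an expired previous value.
        val previous = nextRun.getAndSet(now.plus(period))
        if (previous < now) action()
    }
}
```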
```diff
@@ -12,15 +12,13 @@ data class FileSystemCacheConfiguration(
     val digestAlgorithm : String?,
     val compressionEnabled: Boolean,
     val compressionLevel: Int,
-    val chunkSize: Int,
 ) : Configuration.Cache {
     override fun materialize() = FileSystemCache(
         root ?: Application.builder("rbcs").build().computeCacheDirectory(),
         maxAge,
         digestAlgorithm,
         compressionEnabled,
-        compressionLevel,
-        chunkSize,
+        compressionLevel
     )
 
     override fun getNamespaceURI() = RBCS.RBCS_NAMESPACE_URI
@@ -31,17 +31,13 @@ class FileSystemCacheProvider : CacheProvider<FileSystemCacheConfiguration> {
             ?.let(String::toInt)
             ?: Deflater.DEFAULT_COMPRESSION
         val digestAlgorithm = el.renderAttribute("digest") ?: "MD5"
-        val chunkSize = el.renderAttribute("chunk-size")
-            ?.let(Integer::decode)
-            ?: 0x4000
 
         return FileSystemCacheConfiguration(
             path,
             maxAge,
             digestAlgorithm,
             enableCompression,
-            compressionLevel,
-            chunkSize
+            compressionLevel
         )
     }
 
@@ -61,7 +57,6 @@ class FileSystemCacheProvider : CacheProvider<FileSystemCacheConfiguration> {
         }?.let {
             attr("compression-level", it.toString())
         }
-        attr("chunk-size", chunkSize.toString())
     }
     result
 }
```
```diff
@@ -1,36 +1,31 @@
 package net.woggioni.rbcs.server.cache
 
 import io.netty.buffer.ByteBuf
-import io.netty.buffer.ByteBufAllocator
 import net.woggioni.rbcs.api.Cache
-import net.woggioni.rbcs.api.RequestHandle
-import net.woggioni.rbcs.api.ResponseHandle
-import net.woggioni.rbcs.api.event.RequestStreamingEvent
-import net.woggioni.rbcs.api.event.ResponseStreamingEvent
+import net.woggioni.rbcs.common.ByteBufInputStream
 import net.woggioni.rbcs.common.ByteBufOutputStream
 import net.woggioni.rbcs.common.RBCS.digestString
 import net.woggioni.rbcs.common.contextLogger
-import net.woggioni.rbcs.common.extractChunk
+import net.woggioni.jwo.JWO
+import java.nio.channels.Channels
 import java.security.MessageDigest
 import java.time.Duration
 import java.time.Instant
 import java.util.concurrent.CompletableFuture
 import java.util.concurrent.ConcurrentHashMap
 import java.util.concurrent.PriorityBlockingQueue
-import java.util.concurrent.TimeUnit
 import java.util.concurrent.atomic.AtomicLong
 import java.util.zip.Deflater
 import java.util.zip.DeflaterOutputStream
 import java.util.zip.Inflater
-import java.util.zip.InflaterOutputStream
+import java.util.zip.InflaterInputStream
 
 class InMemoryCache(
-    private val maxAge: Duration,
-    private val maxSize: Long,
-    private val digestAlgorithm: String?,
-    private val compressionEnabled: Boolean,
-    private val compressionLevel: Int,
-    private val chunkSize : Int
+    val maxAge: Duration,
+    val maxSize: Long,
+    val digestAlgorithm: String?,
+    val compressionEnabled: Boolean,
+    val compressionLevel: Int
 ) : Cache {
 
     companion object {
@@ -41,41 +36,44 @@ class InMemoryCache(
     private val size = AtomicLong()
     private val map = ConcurrentHashMap<String, ByteBuf>()
 
-    private class RemovalQueueElement(val key: String, val value: ByteBuf, val expiry: Instant) :
-        Comparable<RemovalQueueElement> {
+    private class RemovalQueueElement(val key: String, val value : ByteBuf, val expiry : Instant) : Comparable<RemovalQueueElement> {
         override fun compareTo(other: RemovalQueueElement) = expiry.compareTo(other.expiry)
     }
 
     private val removalQueue = PriorityBlockingQueue<RemovalQueueElement>()
 
-    @Volatile
     private var running = true
-
-    private val garbageCollector = Thread.ofVirtual().name("in-memory-cache-gc").start {
-        while (running) {
-            val el = removalQueue.poll(1, TimeUnit.SECONDS) ?: continue
+    private val garbageCollector = Thread {
+        while(true) {
+            val el = removalQueue.take()
             val buf = el.value
             val now = Instant.now()
-            if (now > el.expiry) {
+            if(now > el.expiry) {
                 val removed = map.remove(el.key, buf)
-                if (removed) {
+                if(removed) {
                     updateSizeAfterRemoval(buf)
                     //Decrease the reference count for map
                     buf.release()
                 }
+                //Decrease the reference count for removalQueue
+                buf.release()
             } else {
                 removalQueue.put(el)
                 Thread.sleep(minOf(Duration.between(now, el.expiry), Duration.ofSeconds(1)))
             }
         }
-    }
+    }.apply {
+        start()
+    }
 
-    private fun removeEldest(): Long {
-        while (true) {
+    private fun removeEldest() : Long {
+        while(true) {
             val el = removalQueue.take()
             val buf = el.value
             val removed = map.remove(el.key, buf)
-            if (removed) {
+            //Decrease the reference count for removalQueue
+            buf.release()
+            if(removed) {
                 val newSize = updateSizeAfterRemoval(buf)
                 //Decrease the reference count for map
                 buf.release()
@@ -84,8 +82,8 @@ class InMemoryCache(
         }
     }
 
-    private fun updateSizeAfterRemoval(removed: ByteBuf): Long {
-        return size.updateAndGet { currentSize: Long ->
+    private fun updateSizeAfterRemoval(removed: ByteBuf) : Long {
+        return size.updateAndGet { currentSize : Long ->
             currentSize - removed.readableBytes()
         }
     }
@@ -95,8 +93,7 @@ class InMemoryCache(
         garbageCollector.join()
     }
 
-    override fun get(key: String, responseHandle: ResponseHandle, alloc: ByteBufAllocator) {
-        try {
-            (digestAlgorithm
+    override fun get(key: String) =
+        (digestAlgorithm
             ?.let(MessageDigest::getInstance)
             ?.let { md ->
```
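The GC-thread comments above spell out the ownership rule for cached buffers: the map and the removal queue each hold one reference to a `ByteBuf`, and each must release exactly that one reference when it drops the buffer. In miniature:

```kotlin
import io.netty.buffer.Unpooled

fun main() {
    val buf = Unpooled.copiedBuffer("payload".toByteArray()) // refCnt == 1 (the "map" reference)
    buf.retain()                                             // refCnt == 2 (the "removal queue" reference)
    println(buf.refCnt())                                    // 2
    buf.release()                                            // the queue drops its reference
    buf.release()                                            // the map drops its reference; memory is reclaimed
    println(buf.refCnt())                                    // 0
}
```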
```diff
@@ -106,103 +103,48 @@ class InMemoryCache(
                 map[digest]
                     ?.let { value ->
                         val copy = value.retainedDuplicate()
-                        responseHandle.handleEvent(ResponseStreamingEvent.RESPONSE_RECEIVED)
-                        val output = alloc.compositeBuffer()
+                        copy.touch("This has to be released by the caller of the cache")
                         if (compressionEnabled) {
-                            try {
-                                val stream = ByteBufOutputStream(output).let {
-                                    val inflater = Inflater()
-                                    InflaterOutputStream(it, inflater)
-                                }
-                                stream.use { os ->
-                                    var readable = copy.readableBytes()
-                                    while (true) {
-                                        copy.readBytes(os, chunkSize.coerceAtMost(readable))
-                                        readable = copy.readableBytes()
-                                        val last = readable == 0
-                                        if (last) stream.flush()
-                                        if (output.readableBytes() >= chunkSize || last) {
-                                            val chunk = extractChunk(output, alloc)
-                                            val evt = if (last) {
-                                                ResponseStreamingEvent.LastChunkReceived(chunk)
-                                            } else {
-                                                ResponseStreamingEvent.ChunkReceived(chunk)
-                                            }
-                                            responseHandle.handleEvent(evt)
-                                        }
-                                        if (last) break
-                                    }
-                                }
-                            } finally {
-                                copy.release()
-                            }
+                            val inflater = Inflater()
+                            Channels.newChannel(InflaterInputStream(ByteBufInputStream(copy), inflater))
                         } else {
-                            responseHandle.handleEvent(
-                                ResponseStreamingEvent.LastChunkReceived(copy)
-                            )
+                            Channels.newChannel(ByteBufInputStream(copy))
                         }
-                    } ?: responseHandle.handleEvent(ResponseStreamingEvent.NOT_FOUND)
-            }
-        } catch (ex: Throwable) {
-            responseHandle.handleEvent(ResponseStreamingEvent.ExceptionCaught(ex))
-        }
-    }
+                    }
+            }.let {
+                CompletableFuture.completedFuture(it)
+            }
 
-    override fun put(
-        key: String,
-        responseHandle: ResponseHandle,
-        alloc: ByteBufAllocator
-    ): CompletableFuture<RequestHandle> {
-        return CompletableFuture.completedFuture(object : RequestHandle {
-            val buf = alloc.heapBuffer()
-            val stream = ByteBufOutputStream(buf).let {
-                if (compressionEnabled) {
-                    val deflater = Deflater(compressionLevel)
-                    DeflaterOutputStream(it, deflater)
-                } else {
-                    it
-                }
-            }
-
-            override fun handleEvent(evt: RequestStreamingEvent) {
-                when (evt) {
-                    is RequestStreamingEvent.ChunkReceived -> {
-                        evt.chunk.readBytes(stream, evt.chunk.readableBytes())
-                        if (evt is RequestStreamingEvent.LastChunkReceived) {
-                            (digestAlgorithm
-                                ?.let(MessageDigest::getInstance)
-                                ?.let { md ->
-                                    digestString(key.toByteArray(), md)
-                                } ?: key
-                            ).let { digest ->
-                                val oldSize = map.put(digest, buf.retain())?.let { old ->
-                                    val result = old.readableBytes()
-                                    old.release()
-                                    result
-                                } ?: 0
-                                val delta = buf.readableBytes() - oldSize
-                                var newSize = size.updateAndGet { currentSize : Long ->
-                                    currentSize + delta
-                                }
-                                removalQueue.put(RemovalQueueElement(digest, buf, Instant.now().plus(maxAge)))
-                                while(newSize > maxSize) {
-                                    newSize = removeEldest()
-                                }
-                                stream.close()
-                                responseHandle.handleEvent(ResponseStreamingEvent.RESPONSE_RECEIVED)
-                            }
-                        }
-                    }
-
-                    is RequestStreamingEvent.ExceptionCaught -> {
-                        stream.close()
-                    }
-
-                    else -> {
-                    }
-                }
-            }
-        })
-    }
+    override fun put(key: String, content: ByteBuf) =
+        (digestAlgorithm
+            ?.let(MessageDigest::getInstance)
+            ?.let { md ->
+                digestString(key.toByteArray(), md)
+            } ?: key).let { digest ->
+            content.retain()
+            val value = if (compressionEnabled) {
+                val deflater = Deflater(compressionLevel)
+                val buf = content.alloc().buffer()
+                buf.retain()
+                DeflaterOutputStream(ByteBufOutputStream(buf), deflater).use { outputStream ->
+                    ByteBufInputStream(content).use { inputStream ->
+                        JWO.copy(inputStream, outputStream)
+                    }
+                }
+                buf
+            } else {
+                content
+            }
+            val old = map.put(digest, value)
+            val delta = value.readableBytes() - (old?.readableBytes() ?: 0)
+            var newSize = size.updateAndGet { currentSize : Long ->
+                currentSize + delta
+            }
+            removalQueue.put(RemovalQueueElement(digest, value.retain(), Instant.now().plus(maxAge)))
+            while(newSize > maxSize) {
+                newSize = removeEldest()
+            }
+        }.let {
+            CompletableFuture.completedFuture<Void>(null)
+        }
 }
```
```diff
@@ -10,15 +10,13 @@ data class InMemoryCacheConfiguration(
     val digestAlgorithm : String?,
     val compressionEnabled: Boolean,
     val compressionLevel: Int,
-    val chunkSize : Int
 ) : Configuration.Cache {
     override fun materialize() = InMemoryCache(
         maxAge,
         maxSize,
         digestAlgorithm,
         compressionEnabled,
-        compressionLevel,
-        chunkSize
+        compressionLevel
     )
 
     override fun getNamespaceURI() = RBCS.RBCS_NAMESPACE_URI
@@ -31,16 +31,13 @@ class InMemoryCacheProvider : CacheProvider<InMemoryCacheConfiguration> {
             ?.let(String::toInt)
             ?: Deflater.DEFAULT_COMPRESSION
         val digestAlgorithm = el.renderAttribute("digest") ?: "MD5"
-        val chunkSize = el.renderAttribute("chunk-size")
-            ?.let(Integer::decode)
-            ?: 0x4000
         return InMemoryCacheConfiguration(
             maxAge,
             maxSize,
             digestAlgorithm,
             enableCompression,
-            compressionLevel,
-            chunkSize
+            compressionLevel
         )
     }
 
@@ -60,7 +57,6 @@ class InMemoryCacheProvider : CacheProvider<InMemoryCacheConfiguration> {
         }?.let {
             attr("compression-level", it.toString())
         }
-        attr("chunk-size", chunkSize.toString())
     }
     result
 }
```
```diff
@@ -124,7 +124,7 @@ object Parser {
                     val writeIdleTimeout = child.renderAttribute("write-idle-timeout")
                         ?.let(Duration::parse) ?: Duration.of(60, ChronoUnit.SECONDS)
                     val maxRequestSize = child.renderAttribute("max-request-size")
-                        ?.let(Integer::decode) ?: 0x4000000
+                        ?.let(String::toInt) ?: 67108864
                     connection = Configuration.Connection(
                         readTimeout,
                         writeTimeout,
```
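There is a behavioral change hiding in this one-liner: `Integer.decode` accepts hexadecimal and octal prefixes, while `String.toInt` parses decimal only, so the default literal moves from `0x4000000` to its decimal form (64 MiB) and hex values in configuration files stop parsing:

```kotlin
fun main() {
    println(Integer.decode("0x4000000")) // 67108864 — hex and octal prefixes allowed
    println("67108864".toInt())          // 67108864 — decimal only
    // "0x4000000".toInt() would throw NumberFormatException
}
```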
```diff
@@ -17,8 +17,6 @@ import net.woggioni.rbcs.api.exception.CacheException
 import net.woggioni.rbcs.api.exception.ContentTooLargeException
 import net.woggioni.rbcs.common.contextLogger
 import net.woggioni.rbcs.common.debug
-import java.net.SocketException
-import javax.net.ssl.SSLException
 import javax.net.ssl.SSLPeerUnverifiedException
 
 @ChannelHandler.Sharable
@@ -52,12 +50,7 @@ class ExceptionHandler : ChannelDuplexHandler() {
     override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
         when (cause) {
             is DecoderException -> {
-                log.debug(cause.message, cause)
-                ctx.close()
-            }
-
-            is SocketException -> {
-                log.debug(cause.message, cause)
+                log.error(cause.message, cause)
                 ctx.close()
             }
 
@@ -66,16 +59,10 @@ class ExceptionHandler : ChannelDuplexHandler() {
                     .addListener(ChannelFutureListener.CLOSE_ON_FAILURE)
             }
 
-            is SSLException -> {
-                log.debug(cause.message, cause)
-                ctx.close()
-            }
-
             is ContentTooLargeException -> {
                 ctx.writeAndFlush(TOO_BIG.retainedDuplicate())
                     .addListener(ChannelFutureListener.CLOSE_ON_FAILURE)
             }
 
             is ReadTimeoutException -> {
                 log.debug {
                     val channelId = ctx.channel().id().asShortText()
@@ -83,7 +70,6 @@ class ExceptionHandler : ChannelDuplexHandler() {
                 }
                 ctx.close()
             }
-
             is WriteTimeoutException -> {
                 log.debug {
                     val channelId = ctx.channel().id().asShortText()
@@ -91,13 +77,11 @@ class ExceptionHandler : ChannelDuplexHandler() {
                 }
                 ctx.close()
             }
 
             is CacheException -> {
                 log.error(cause.message, cause)
                 ctx.writeAndFlush(NOT_AVAILABLE.retainedDuplicate())
                     .addListener(ChannelFutureListener.CLOSE_ON_FAILURE)
             }
-
             else -> {
                 log.error(cause.message, cause)
                 ctx.writeAndFlush(SERVER_ERROR.retainedDuplicate())
```
```diff
@@ -2,66 +2,34 @@ package net.woggioni.rbcs.server.handler
 
 import io.netty.buffer.Unpooled
 import io.netty.channel.ChannelFutureListener
+import io.netty.channel.ChannelHandler
 import io.netty.channel.ChannelHandlerContext
 import io.netty.channel.DefaultFileRegion
 import io.netty.channel.SimpleChannelInboundHandler
 import io.netty.handler.codec.http.DefaultFullHttpResponse
-import io.netty.handler.codec.http.DefaultHttpContent
 import io.netty.handler.codec.http.DefaultHttpResponse
-import io.netty.handler.codec.http.DefaultLastHttpContent
-import io.netty.handler.codec.http.HttpContent
+import io.netty.handler.codec.http.FullHttpRequest
 import io.netty.handler.codec.http.HttpHeaderNames
 import io.netty.handler.codec.http.HttpHeaderValues
 import io.netty.handler.codec.http.HttpMethod
-import io.netty.handler.codec.http.HttpObject
-import io.netty.handler.codec.http.HttpRequest
 import io.netty.handler.codec.http.HttpResponseStatus
 import io.netty.handler.codec.http.HttpUtil
 import io.netty.handler.codec.http.LastHttpContent
+import io.netty.handler.stream.ChunkedNioStream
 import net.woggioni.rbcs.api.Cache
-import net.woggioni.rbcs.api.RequestHandle
-import net.woggioni.rbcs.api.ResponseHandle
-import net.woggioni.rbcs.api.event.RequestStreamingEvent
-import net.woggioni.rbcs.api.event.ResponseStreamingEvent
 import net.woggioni.rbcs.common.contextLogger
-import net.woggioni.rbcs.common.debug
 import net.woggioni.rbcs.server.debug
 import net.woggioni.rbcs.server.warn
+import java.nio.channels.FileChannel
 import java.nio.file.Path
-import java.util.concurrent.CompletableFuture
 
+@ChannelHandler.Sharable
 class ServerHandler(private val cache: Cache, private val serverPrefix: Path) :
-    SimpleChannelInboundHandler<HttpObject>() {
+    SimpleChannelInboundHandler<FullHttpRequest>() {
 
     private val log = contextLogger()
 
-    override fun channelRead0(ctx: ChannelHandlerContext, msg: HttpObject) {
-        when(msg) {
-            is HttpRequest -> handleRequest(ctx, msg)
-            is HttpContent -> handleContent(msg)
-        }
-    }
-
-    private var requestHandle : CompletableFuture<RequestHandle?> = CompletableFuture.completedFuture(null)
-
-    private fun handleContent(content : HttpContent) {
-        content.retain()
-        requestHandle.thenAccept { handle ->
-            handle?.let {
-                val evt = if(content is LastHttpContent) {
-                    RequestStreamingEvent.LastChunkReceived(content.content())
-                } else {
-                    RequestStreamingEvent.ChunkReceived(content.content())
-                }
-                it.handleEvent(evt)
-                content.release()
-            } ?: content.release()
-        }
-    }
-
-    private fun handleRequest(ctx : ChannelHandlerContext, msg : HttpRequest) {
+    override fun channelRead0(ctx: ChannelHandlerContext, msg: FullHttpRequest) {
         val keepAlive: Boolean = HttpUtil.isKeepAlive(msg)
         val method = msg.method()
         if (method === HttpMethod.GET) {
```
```diff
@@ -74,44 +42,24 @@ class ServerHandler(private val cache: Cache, private val serverPrefix: Path) :
                 return
             }
             if (serverPrefix == prefix) {
-                val responseHandle = ResponseHandle { evt ->
-                    when (evt) {
-                        is ResponseStreamingEvent.ResponseReceived -> {
-                            val response = DefaultHttpResponse(msg.protocolVersion(), HttpResponseStatus.OK)
-                            response.headers()[HttpHeaderNames.CONTENT_TYPE] = HttpHeaderValues.APPLICATION_OCTET_STREAM
-                            if (!keepAlive) {
-                                response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE)
-                            } else {
-                                response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE)
-                            }
-                            response.headers().set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED)
-                            ctx.writeAndFlush(response)
-                        }
-
-                        is ResponseStreamingEvent.LastChunkReceived -> {
-                            val channelFuture = ctx.writeAndFlush(DefaultLastHttpContent(evt.chunk))
-                            if (!keepAlive) {
-                                channelFuture
-                                    .addListener(ChannelFutureListener.CLOSE)
-                            }
-                        }
-
-                        is ResponseStreamingEvent.ChunkReceived -> {
-                            ctx.writeAndFlush(DefaultHttpContent(evt.chunk))
-                        }
-
-                        is ResponseStreamingEvent.ExceptionCaught -> {
-                            ctx.fireExceptionCaught(evt.exception)
-                        }
-
-                        is ResponseStreamingEvent.NotFound -> {
-                            val response = DefaultFullHttpResponse(msg.protocolVersion(), HttpResponseStatus.NOT_FOUND)
-                            response.headers()[HttpHeaderNames.CONTENT_LENGTH] = 0
-                            ctx.writeAndFlush(response)
-                        }
-
-                        is ResponseStreamingEvent.FileReceived -> {
-                            val content = DefaultFileRegion(evt.file, 0, evt.file.size())
+                cache.get(key).thenApply { channel ->
+                    if(channel != null) {
+                        log.debug(ctx) {
+                            "Cache hit for key '$key'"
+                        }
+                        val response = DefaultHttpResponse(msg.protocolVersion(), HttpResponseStatus.OK)
+                        response.headers()[HttpHeaderNames.CONTENT_TYPE] = HttpHeaderValues.APPLICATION_OCTET_STREAM
+                        if (!keepAlive) {
+                            response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE)
+                            response.headers().set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.IDENTITY)
+                        } else {
+                            response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE)
+                            response.headers().set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED)
+                        }
+                        ctx.write(response)
+                        when (channel) {
+                            is FileChannel -> {
+                                val content = DefaultFileRegion(channel, 0, channel.size())
                                 if (keepAlive) {
                                     ctx.write(content)
                                     ctx.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT.retainedDuplicate())
```
```diff
@@ -120,9 +68,28 @@ class ServerHandler(private val cache: Cache, private val serverPrefix: Path) :
                                         .addListener(ChannelFutureListener.CLOSE)
                                 }
                             }
+                            else -> {
+                                val content = ChunkedNioStream(channel)
+                                if (keepAlive) {
+                                    ctx.write(content).addListener {
+                                        content.close()
+                                    }
+                                    ctx.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT.retainedDuplicate())
+                                } else {
+                                    ctx.writeAndFlush(content)
+                                        .addListener(ChannelFutureListener.CLOSE)
+                                }
+                            }
                         }
-                    }
-                    cache.get(key, responseHandle, ctx.alloc())
+                    } else {
+                        log.debug(ctx) {
+                            "Cache miss for key '$key'"
+                        }
+                        val response = DefaultFullHttpResponse(msg.protocolVersion(), HttpResponseStatus.NOT_FOUND)
+                        response.headers()[HttpHeaderNames.CONTENT_LENGTH] = 0
+                        ctx.writeAndFlush(response)
+                    }
+                }.whenComplete { _, ex -> ex?.let(ctx::fireExceptionCaught) }
             } else {
                 log.warn(ctx) {
                     "Got request for unhandled path '${msg.uri()}'"
```
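The new `when` over the returned channel type picks the cheapest write strategy: a plain `FileChannel` can be sent with the zero-copy `DefaultFileRegion` (backed by `sendfile`), while a decompressing channel has no known length up front and must be streamed chunk by chunk via `ChunkedNioStream` — which relies on the `ChunkedWriteHandler` this server already installs in the pipeline. A minimal sketch of the dispatch:

```kotlin
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.DefaultFileRegion
import io.netty.handler.stream.ChunkedNioStream
import java.nio.channels.FileChannel
import java.nio.channels.ReadableByteChannel

// Assumes a ChunkedWriteHandler is present in the pipeline for the
// ChunkedNioStream branch; the caller is responsible for flushing.
fun writeBody(ctx: ChannelHandlerContext, channel: ReadableByteChannel) {
    when (channel) {
        is FileChannel -> ctx.write(DefaultFileRegion(channel, 0, channel.size()))
        else -> ctx.write(ChunkedNioStream(channel))
    }
}
```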
```diff
@@ -140,32 +107,15 @@ class ServerHandler(private val cache: Cache, private val serverPrefix: Path) :
                 log.debug(ctx) {
                     "Added value for key '$key' to build cache"
                 }
-                val responseHandle = ResponseHandle { evt ->
-                    when (evt) {
-                        is ResponseStreamingEvent.ResponseReceived -> {
-                            val response = DefaultFullHttpResponse(
-                                msg.protocolVersion(), HttpResponseStatus.CREATED,
-                                Unpooled.copiedBuffer(key.toByteArray())
-                            )
-                            response.headers()[HttpHeaderNames.CONTENT_LENGTH] = response.content().readableBytes()
-                            ctx.writeAndFlush(response)
-                            this.requestHandle = CompletableFuture.completedFuture(null)
-                        }
-                        is ResponseStreamingEvent.ChunkReceived -> {
-                            evt.chunk.release()
-                        }
-                        is ResponseStreamingEvent.ExceptionCaught -> {
-                            ctx.fireExceptionCaught(evt.exception)
-                        }
-                        else -> {}
-                    }
-                }
-
-                this.requestHandle = cache.put(key, responseHandle, ctx.alloc()).exceptionally { ex ->
+                cache.put(key, msg.content()).thenRun {
+                    val response = DefaultFullHttpResponse(
+                        msg.protocolVersion(), HttpResponseStatus.CREATED,
+                        Unpooled.copiedBuffer(key.toByteArray())
+                    )
+                    response.headers()[HttpHeaderNames.CONTENT_LENGTH] = response.content().readableBytes()
+                    ctx.writeAndFlush(response)
+                }.whenComplete { _, ex ->
                     ctx.fireExceptionCaught(ex)
-                    null
-                }.also {
-                    log.debug { "Replacing request handle with $it"}
                 }
             } else {
                 log.warn(ctx) {
```
```diff
@@ -175,12 +125,9 @@ class ServerHandler(private val cache: Cache, private val serverPrefix: Path) :
             response.headers()[HttpHeaderNames.CONTENT_LENGTH] = "0"
             ctx.writeAndFlush(response)
             }
-        } else if (method == HttpMethod.TRACE) {
+        } else if(method == HttpMethod.TRACE) {
             val replayedRequestHead = ctx.alloc().buffer()
-            replayedRequestHead.writeCharSequence(
-                "TRACE ${Path.of(msg.uri())} ${msg.protocolVersion().text()}\r\n",
-                Charsets.US_ASCII
-            )
+            replayedRequestHead.writeCharSequence("TRACE ${Path.of(msg.uri())} ${msg.protocolVersion().text()}\r\n", Charsets.US_ASCII)
             msg.headers().forEach { (key, value) ->
                 replayedRequestHead.apply {
                     writeCharSequence(key, Charsets.US_ASCII)
```
@@ -190,24 +137,16 @@ class ServerHandler(private val cache: Cache, private val serverPrefix: Path) :
                 }
             }
             replayedRequestHead.writeCharSequence("\r\n", Charsets.US_ASCII)
-            this.requestHandle = CompletableFuture.completedFuture(RequestHandle { evt ->
-                when(evt) {
-                    is RequestStreamingEvent.LastChunkReceived -> {
-                        ctx.writeAndFlush(DefaultLastHttpContent(evt.chunk.retain()))
-                        this.requestHandle = CompletableFuture.completedFuture(null)
+            val requestBody = msg.content()
+            requestBody.retain()
+            val responseBody = ctx.alloc().compositeBuffer(2).apply {
+                addComponents(true, replayedRequestHead)
+                addComponents(true, requestBody)
             }
-                    is RequestStreamingEvent.ChunkReceived -> ctx.writeAndFlush(DefaultHttpContent(evt.chunk.retain()))
-                    is RequestStreamingEvent.ExceptionCaught -> ctx.fireExceptionCaught(evt.exception)
-                    else -> {
-
-                    }
-                }
-            }).also {
-                log.debug { "Replacing request handle with $it"}
-            }
-            val response = DefaultHttpResponse(msg.protocolVersion(), HttpResponseStatus.OK)
+            val response = DefaultFullHttpResponse(msg.protocolVersion(), HttpResponseStatus.OK, responseBody)
             response.headers().apply {
                 set(HttpHeaderNames.CONTENT_TYPE, "message/http")
+                set(HttpHeaderNames.CONTENT_LENGTH, responseBody.readableBytes())
             }
             ctx.writeAndFlush(response)
         } else {
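The rewritten TRACE branch assembles the echoed message from two buffers, the replayed request head and the retained request body, using a Netty composite buffer so nothing is copied. A standalone sketch of that technique (the buffer contents here are made up for illustration):

```kotlin
import io.netty.buffer.ByteBufAllocator

fun main() {
    // CompositeByteBuf stitches the two buffers together without copying;
    // addComponents(true, ...) also advances the writer index.
    val alloc = ByteBufAllocator.DEFAULT
    val head = alloc.buffer().also { it.writeCharSequence("TRACE / HTTP/1.1\r\n\r\n", Charsets.US_ASCII) }
    val body = alloc.buffer().also { it.writeCharSequence("hello", Charsets.US_ASCII) }
    val composite = alloc.compositeBuffer(2).apply {
        addComponents(true, head)
        addComponents(true, body)
    }
    println(composite.toString(Charsets.US_ASCII)) // the whole echoed message
    composite.release() // releasing the composite releases its components
}
```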
@@ -219,11 +158,4 @@ class ServerHandler(private val cache: Cache, private val serverPrefix: Path) :
             ctx.writeAndFlush(response)
         }
     }
-
-    override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
-        requestHandle.thenAccept { handle ->
-            handle?.handleEvent(RequestStreamingEvent.ExceptionCaught(cause))
-        }
-        super.exceptionCaught(ctx, cause)
-    }
 }
@@ -1,11 +1,10 @@
 package net.woggioni.rbcs.server.throttling
 
+import io.netty.channel.ChannelHandler.Sharable
 import io.netty.channel.ChannelHandlerContext
 import io.netty.channel.ChannelInboundHandlerAdapter
 import io.netty.handler.codec.http.DefaultFullHttpResponse
-import io.netty.handler.codec.http.HttpContent
 import io.netty.handler.codec.http.HttpHeaderNames
-import io.netty.handler.codec.http.HttpRequest
 import io.netty.handler.codec.http.HttpResponseStatus
 import io.netty.handler.codec.http.HttpVersion
 import net.woggioni.rbcs.api.Configuration
@@ -19,19 +18,15 @@ import java.time.temporal.ChronoUnit
 import java.util.concurrent.TimeUnit
 
 
-class ThrottlingHandler(cfg: Configuration) : ChannelInboundHandlerAdapter() {
-
-    private companion object {
-        @JvmStatic
+@Sharable
+class ThrottlingHandler(cfg: Configuration) :
+    ChannelInboundHandlerAdapter() {
     private val log = contextLogger()
-    }
 
     private val bucketManager = BucketManager.from(cfg)
 
     private val connectionConfiguration = cfg.connection
 
-    private var queuedContent : MutableList<HttpContent>? = null
-
     /**
      * If the suggested waiting time from the bucket is lower than this
      * amount, then the server will simply wait by itself before sending a response
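Dropping the mutable `queuedContent` field is what makes the new `@Sharable` annotation sound: with no per-connection state left, a single `ThrottlingHandler` instance can be installed in every channel pipeline instead of being re-created per connection. A hypothetical bootstrap sketch (the surrounding server wiring is assumed, not shown in this diff):

```kotlin
import io.netty.bootstrap.ServerBootstrap
import io.netty.channel.ChannelInitializer
import io.netty.channel.nio.NioEventLoopGroup
import io.netty.channel.socket.SocketChannel
import io.netty.channel.socket.nio.NioServerSocketChannel
import net.woggioni.rbcs.api.Configuration

fun bootstrap(cfg: Configuration): ServerBootstrap {
    val throttlingHandler = ThrottlingHandler(cfg) // created once, shared by all channels
    return ServerBootstrap()
        .group(NioEventLoopGroup())
        .channel(NioServerSocketChannel::class.java)
        .childHandler(object : ChannelInitializer<SocketChannel>() {
            override fun initChannel(ch: SocketChannel) {
                // Safe only because the handler is @Sharable and keeps no per-channel state
                ch.pipeline().addLast(throttlingHandler)
            }
        })
}
```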
@@ -43,10 +38,7 @@ class ThrottlingHandler(cfg: Configuration) : ChannelInboundHandlerAdapter() {
         connectionConfiguration.writeIdleTimeout
     ).dividedBy(2)
 
-
-
     override fun channelRead(ctx: ChannelHandlerContext, msg: Any) {
-        if(msg is HttpRequest) {
         val buckets = mutableListOf<Bucket>()
         val user = ctx.channel().attr(RemoteBuildCacheServer.userAttribute).get()
         if (user != null) {
@@ -62,19 +54,13 @@ class ThrottlingHandler(cfg: Configuration) : ChannelInboundHandlerAdapter() {
             bucketManager.getBucketByAddress(ctx.channel().remoteAddress() as InetSocketAddress)?.let(buckets::add)
         }
         if (buckets.isEmpty()) {
-            super.channelRead(ctx, msg)
+            return super.channelRead(ctx, msg)
         } else {
             handleBuckets(buckets, ctx, msg, true)
         }
-            ctx.channel().id()
-        } else if(msg is HttpContent) {
-            queuedContent?.add(msg) ?: super.channelRead(ctx, msg)
-        } else {
-            super.channelRead(ctx, msg)
-        }
     }
 
-    private fun handleBuckets(buckets: List<Bucket>, ctx: ChannelHandlerContext, msg: Any, delayResponse: Boolean) {
+    private fun handleBuckets(buckets : List<Bucket>, ctx : ChannelHandlerContext, msg : Any, delayResponse : Boolean) {
         var nextAttempt = -1L
         for (bucket in buckets) {
             val bucketNextAttempt = bucket.removeTokensWithEstimate(1)
@@ -82,18 +68,12 @@ class ThrottlingHandler(cfg: Configuration) : ChannelInboundHandlerAdapter() {
                 nextAttempt = bucketNextAttempt
             }
         }
-        if (nextAttempt < 0) {
+        if(nextAttempt < 0) {
             super.channelRead(ctx, msg)
-            queuedContent?.let {
-                for(content in it) {
-                    super.channelRead(ctx, content)
+            return
         }
-                queuedContent = null
-            }
-        } else {
         val waitDuration = Duration.of(LongMath.ceilDiv(nextAttempt, 100_000_000L) * 100L, ChronoUnit.MILLIS)
         if (delayResponse && waitDuration < waitThreshold) {
-            this.queuedContent = mutableListOf()
             ctx.executor().schedule({
                 handleBuckets(buckets, ctx, msg, false)
             }, waitDuration.toMillis(), TimeUnit.MILLISECONDS)
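The `waitDuration` line kept above converts the bucket's estimate, a wait time in nanoseconds, into a `Duration` rounded up to the next 100 ms step before either scheduling a silent retry or sending a throttled response. The arithmetic in isolation (the estimate value is made up, and `ceilDiv` stands in for the `LongMath.ceilDiv` used by the handler):

```kotlin
import java.time.Duration
import java.time.temporal.ChronoUnit

// Stand-in for the LongMath.ceilDiv call used in the handler
fun ceilDiv(x: Long, y: Long): Long = (x + y - 1) / y

fun main() {
    val nextAttempt = 250_000_000L // hypothetical estimate: 250 ms in nanoseconds
    val waitDuration = Duration.of(ceilDiv(nextAttempt, 100_000_000L) * 100L, ChronoUnit.MILLIS)
    println(waitDuration) // PT0.3S: 250 ms rounded up to the next 100 ms step
}
```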
@@ -101,7 +81,6 @@ class ThrottlingHandler(cfg: Configuration) : ChannelInboundHandlerAdapter() {
                 sendThrottledResponse(ctx, waitDuration)
             }
         }
-    }
 
     private fun sendThrottledResponse(ctx: ChannelHandlerContext, retryAfter: Duration) {
         val response = DefaultFullHttpResponse(
@@ -39,7 +39,7 @@
         <xs:attribute name="idle-timeout" type="xs:duration" use="optional" default="PT30S"/>
         <xs:attribute name="read-idle-timeout" type="xs:duration" use="optional" default="PT60S"/>
         <xs:attribute name="write-idle-timeout" type="xs:duration" use="optional" default="PT60S"/>
-        <xs:attribute name="max-request-size" type="rbcs:byteSize" use="optional" default="0x4000000"/>
+        <xs:attribute name="max-request-size" type="xs:unsignedInt" use="optional" default="67108864"/>
     </xs:complexType>
 
     <xs:complexType name="eventExecutorType">
@@ -52,11 +52,10 @@
         <xs:complexContent>
             <xs:extension base="rbcs:cacheType">
                 <xs:attribute name="max-age" type="xs:duration" default="P1D"/>
-                <xs:attribute name="max-size" type="rbcs:byteSize" default="0x1000000"/>
+                <xs:attribute name="max-size" type="xs:token" default="0x1000000"/>
                 <xs:attribute name="digest" type="xs:token" default="MD5"/>
                 <xs:attribute name="enable-compression" type="xs:boolean" default="true"/>
                 <xs:attribute name="compression-level" type="xs:byte" default="-1"/>
-                <xs:attribute name="chunk-size" type="rbcs:byteSize" default="0x4000"/>
             </xs:extension>
         </xs:complexContent>
     </xs:complexType>
@@ -69,7 +68,6 @@
                 <xs:attribute name="digest" type="xs:token" default="MD5"/>
                 <xs:attribute name="enable-compression" type="xs:boolean" default="true"/>
                 <xs:attribute name="compression-level" type="xs:byte" default="-1"/>
-                <xs:attribute name="chunk-size" type="rbcs:byteSize" default="0x4000"/>
             </xs:extension>
         </xs:complexContent>
     </xs:complexType>
@@ -222,10 +220,5 @@
         <xs:attribute name="port" type="xs:unsignedShort" use="required"/>
     </xs:complexType>
 
-    <xs:simpleType name="byteSize">
-        <xs:restriction base="xs:token">
-            <xs:pattern value="(0x[a-f0-9]+|[0-9]+)"/>
-        </xs:restriction>
-    </xs:simpleType>
 
 </xs:schema>
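For reference, the `rbcs:byteSize` type deleted above accepted either a decimal or a lowercase hexadecimal (`0x…`) token, which is why the schema change earlier in this diff also rewrites the `max-request-size` default from `0x4000000` to its decimal equivalent `67108864` when switching to `xs:unsignedInt`. A sketch of a parser for the old convention (hypothetical helper, not part of the codebase):

```kotlin
// Parses values matching the removed XSD pattern (0x[a-f0-9]+|[0-9]+)
fun parseByteSize(value: String): Long =
    if (value.startsWith("0x")) value.substring(2).toLong(16)
    else value.toLong()

fun main() {
    println(parseByteSize("0x4000000")) // 67108864
    println(parseByteSize("67108864"))  // 67108864
}
```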
@@ -47,13 +47,11 @@ abstract class AbstractBasicAuthServerTest : AbstractServerTest() {
             ),
             users.asSequence().map { it.name to it}.toMap(),
             sequenceOf(writersGroup, readersGroup).map { it.name to it}.toMap(),
-            FileSystemCacheConfiguration(
-                this.cacheDir,
+            FileSystemCacheConfiguration(this.cacheDir,
                 maxAge = Duration.ofSeconds(3600 * 24),
                 digestAlgorithm = "MD5",
                 compressionLevel = Deflater.DEFAULT_COMPRESSION,
-                compressionEnabled = false,
-                chunkSize = 0x1000
+                compressionEnabled = false
             ),
             Configuration.BasicAuthentication(),
             null,
@@ -156,8 +156,7 @@ abstract class AbstractTlsServerTest : AbstractServerTest() {
                 maxAge = Duration.ofSeconds(3600 * 24),
                 compressionEnabled = true,
                 compressionLevel = Deflater.DEFAULT_COMPRESSION,
-                digestAlgorithm = "MD5",
-                chunkSize = 0x1000
+                digestAlgorithm = "MD5"
             ),
 //            InMemoryCacheConfiguration(
 //                maxAge = Duration.ofSeconds(3600 * 24),
@@ -86,7 +86,7 @@ class BasicAuthServerTest : AbstractBasicAuthServerTest() {
     @Test
     @Order(4)
     fun putAsAWriterUser() {
-        val client: HttpClient = HttpClient.newBuilder().version(HttpClient.Version.HTTP_1_1).build()
+        val client: HttpClient = HttpClient.newHttpClient()
 
         val (key, value) = keyValuePair
         val user = cfg.users.values.find {
@@ -52,8 +52,7 @@ class NoAuthServerTest : AbstractServerTest() {
                 compressionEnabled = true,
                 digestAlgorithm = "MD5",
                 compressionLevel = Deflater.DEFAULT_COMPRESSION,
-                maxSize = 0x1000000,
-                chunkSize = 0x1000
+                maxSize = 0x1000000
             ),
             null,
             null,
@@ -81,7 +80,7 @@ class NoAuthServerTest : AbstractServerTest() {
     @Test
     @Order(1)
     fun putWithNoAuthorizationHeader() {
-        val client: HttpClient = HttpClient.newBuilder().version(HttpClient.Version.HTTP_1_1).build()
+        val client: HttpClient = HttpClient.newHttpClient()
         val (key, value) = keyValuePair
 
         val requestBuilder = newRequestBuilder(key)
@@ -11,7 +11,7 @@
         idle-timeout="PT30M"
         max-request-size="101325"/>
     <event-executor use-virtual-threads="false"/>
-    <cache xs:type="rbcs:fileSystemCacheType" path="/tmp/rbcs" max-age="P7D" chunk-size="0xa910"/>
+    <cache xs:type="rbcs:fileSystemCacheType" path="/tmp/rbcs" max-age="P7D"/>
     <authentication>
         <none/>
     </authentication>
@@ -13,7 +13,7 @@
         read-timeout="PT5M"
         write-timeout="PT5M"/>
     <event-executor use-virtual-threads="true"/>
-    <cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" max-size="16777216" compression-mode="deflate" chunk-size="123">
+    <cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" max-size="16777216" compression-mode="deflate">
         <server host="memcached" port="11211"/>
     </cache>
     <authorization>
@@ -12,7 +12,7 @@
         idle-timeout="PT30M"
         max-request-size="101325"/>
     <event-executor use-virtual-threads="false"/>
-    <cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" max-size="101325" digest="SHA-256" chunk-size="456">
+    <cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" max-size="101325" digest="SHA-256">
         <server host="127.0.0.1" port="11211" max-connections="10" connection-timeout="PT20S"/>
     </cache>
     <authentication>
@@ -11,7 +11,7 @@
         idle-timeout="PT30M"
         max-request-size="4096"/>
     <event-executor use-virtual-threads="false"/>
-    <cache xs:type="rbcs:inMemoryCacheType" max-age="P7D" chunk-size="0xa91f"/>
+    <cache xs:type="rbcs:inMemoryCacheType" max-age="P7D"/>
     <authorization>
         <users>
             <user name="user1" password="password1">
@@ -29,6 +29,6 @@ include 'rbcs-api'
 include 'rbcs-common'
 include 'rbcs-server-memcache'
 include 'rbcs-cli'
+include 'docker'
 include 'rbcs-client'
 include 'rbcs-server'
-include 'docker'