Compare commits

...

16 Commits

Author SHA1 Message Date
37da03c719 added signal handler to native executable
2025-02-25 19:15:48 +08:00
60bc4375cf update lys-catalog version 2025-02-25 15:54:11 +08:00
725fe22b80 added server configuration file documentation 2025-02-25 15:31:26 +08:00
ca18b63f27 added GraalVM native image executable build 2025-02-25 15:30:58 +08:00
23f2a351a6 shared event executor group between server and clients
- improved documentation
- closed memcache client's thread pools
2025-02-24 13:52:20 +08:00
c7d2b89d82 fixed server prefix handling 2025-02-22 15:25:41 +08:00
72c34b57a6 fixed memory leak 2025-02-22 15:25:41 +08:00
619873c4a9 removed readTimeout and writeTimeout from server configuration
added Markdown documentation
2025-02-22 15:25:36 +08:00
591f6e2af4 parametrized password hashing algorithm for basic authentication 2025-02-20 16:46:15 +08:00
ad00ebee9b made logback configuration file overridable in Docker image without changing ENTRYPOINT
2025-02-20 13:39:27 +08:00
adf8a0cf24 added documentation for rbcs-servlet
switched Docker images to serial GC
2025-02-19 23:00:56 +08:00
42eb26a948 optimize imports 2025-02-19 22:40:14 +08:00
f048a60540 implemented request/response streaming
added metadata to cache values

added cache servlet for comparison
2025-02-19 22:37:54 +08:00
0463038aaa first commit with streaming support (buggy and unreliable) 2025-02-13 23:02:08 +08:00
7eca8a270d 0.1.6 release
2025-02-08 00:54:25 +08:00
84d7c977f9 added randomizer to retries 2025-02-07 23:19:13 +08:00
105 changed files with 5362 additions and 1219 deletions


@@ -57,6 +57,18 @@ jobs:
          target: release-memcache
          cache-from: type=registry,ref=gitea.woggioni.net/woggioni/rbcs:buildx
          cache-to: type=registry,mode=max,compression=zstd,image-manifest=true,oci-mediatypes=true,ref=gitea.woggioni.net/woggioni/rbcs:buildx
      -
        name: Build rbcs memcache Docker image
        uses: docker/build-push-action@v5.3.0
        with:
          context: "docker/build/docker"
          platforms: linux/amd64
          push: true
          pull: true
          tags: |
            gitea.woggioni.net/woggioni/rbcs:native
            gitea.woggioni.net/woggioni/rbcs:native-${{ steps.retrieve-version.outputs.VERSION }}
          target: release-native
      - name: Publish artifacts
        env:
          PUBLISHER_TOKEN: ${{ secrets.PUBLISHER_TOKEN }}

LICENSE Normal file

@@ -0,0 +1,20 @@
The MIT License (MIT)
Copyright (c) 2017 Y. T. CHUNG <zonyitoo@gmail.com>
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

README.md

@@ -0,0 +1,209 @@
# Remote Build Cache Server
Remote Build Cache Server (shortened to RBCS) allows you to share and reuse unchanged build
and test outputs across the team. This speeds up local and CI builds since cycles are not wasted
re-building components that are unaffected by new code changes. RBCS supports both Gradle and
Maven build tool environments.
It comes with pluggable storage backends: the core application offers in-memory and disk-backed storage,
and an official plugin adds memcached as a storage backend.
It supports HTTP basic authentication or, alternatively, TLS certificate authentication, together with role-based access control (RBAC)
and throttling.
## Quickstart
### Downloading the jar file
You can download the latest version from [this link](https://gitea.woggioni.net/woggioni/-/packages/maven/net.woggioni:rbcs-cli/)
Assuming you have Java 21 or later installed, you can launch the server directly with
```bash
java -jar rbcs-cli.jar server
```
By default it starts an HTTP server bound to localhost and listening on port 8080 with no authentication,
writing data to the disk, which you can use for testing.
### Using the Docker image
You can pull the latest Docker image with
```bash
docker pull gitea.woggioni.net/woggioni/rbcs:latest
```
By default it starts an HTTP server bound to localhost and listening on port 8080 with no authentication,
writing data to the disk, which you can use for testing.
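For example, you could run the image like this (a sketch: the `/etc/rbcs` path and the mounted configuration are hypothetical, and the server is only reachable from outside the container if your `rbcs.xml` binds to `0.0.0.0`):
```bash
# publish port 8080 and point the server at a mounted configuration directory
docker run --rm -p 8080:8080 \
  -e RBCS_CONFIGURATION_DIR=/etc/rbcs \
  -v "$PWD/conf:/etc/rbcs" \
  gitea.woggioni.net/woggioni/rbcs:latest
```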
## Usage
### Configuration
The location of the `rbcs.xml` configuration file depends on the operating system.
Alternatively, it can be changed by setting the `RBCS_CONFIGURATION_DIR` environment variable or the `net.woggioni.rbcs.conf.dir` Java system property
to the directory that contains the `rbcs.xml` file.
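For example (the `/etc/rbcs` directory is just an illustration):
```bash
# via the environment variable
export RBCS_CONFIGURATION_DIR=/etc/rbcs
java -jar rbcs-cli.jar server

# or, equivalently, via the JVM system property
java -Dnet.woggioni.rbcs.conf.dir=/etc/rbcs -jar rbcs-cli.jar server
```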
The server configuration file follows the XML format and uses XML schema for validation
(you can find the schema for the main configuration file [here](https://gitea.woggioni.net/woggioni/rbcs/src/branch/master/rbcs-server/src/main/resources/net/woggioni/rbcs/server/schema/rbcs.xsd)).
Configuration values are enclosed in XML attributes and support system property / environment variable interpolation.
As an example, you can configure RBCS to read the server port number from the `RBCS_SERVER_PORT` environment variable
and the bind address from the `rbc.bind.address` JVM system property.
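A minimal sketch of such an interpolated `<bind>` element; this assumes the `${sys:...}` placeholder syntax that appears in the sample configuration and an analogous `${env:...}` form for environment variables:
```xml
<!-- assumption: ${env:...} expands an environment variable, ${sys:...} a JVM system property -->
<bind host="${sys:rbc.bind.address}" port="${env:RBCS_SERVER_PORT}"/>
```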
Full documentation for all tags and attributes is available [here](doc/server_configuration.md).
### Plugins
If you want to use memcache as a storage backend you'll also need to download [the memcache plugin](https://gitea.woggioni.net/woggioni/-/packages/maven/net.woggioni:rbcs-server-memcache/)
Plugins need to be stored in a folder named `plugins` located in the server's working directory
(the directory where the server process is started). They are shipped as TAR archives, so you need to extract
the contents of the archive into the `plugins` directory for the server to pick them up.
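For instance (a sketch; use the name of the plugin archive you actually downloaded):
```bash
mkdir -p plugins
# extract the plugin archive into the plugins directory
tar -xf rbcs-server-memcache*.tar -C plugins
```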
### Using RBCS with Gradle
Add this to the `settings.gradle` file of your project
```groovy
buildCache {
    remote(HttpBuildCache) {
        url = 'https://rbcs.example.com/'
        push = true
        allowInsecureProtocol = false
        // The credentials block is only required if you enable
        // HTTP basic authentication on RBCS
        credentials {
            username = 'build-cache-user'
            password = 'some-complicated-password'
        }
    }
}
```
Alternatively, you can add this to `${GRADLE_HOME}/init.gradle` to configure the remote cache
at the system level:
```groovy
gradle.settingsEvaluated { settings ->
    settings.buildCache {
        remote(HttpBuildCache) {
            url = 'https://rbcs.example.com/'
            push = true
            allowInsecureProtocol = false
            // The credentials block is only required if you enable
            // HTTP basic authentication on RBCS
            credentials {
                username = 'build-cache-user'
                password = 'some-complicated-password'
            }
        }
    }
}
```
Then add `org.gradle.caching=true` to your `<project>/gradle.properties` or run Gradle with `--build-cache`.
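For example (a sketch, assuming a wrapper-based project):
```bash
# enable the build cache for a single invocation
./gradlew build --build-cache

# or enable it permanently for the project
echo "org.gradle.caching=true" >> gradle.properties
```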
Read [Gradle documentation](https://docs.gradle.org/current/userguide/build_cache.html) for more detailed information.
### Using RBCS with Maven
1. Create an `extensions.xml` in `<project>/.mvn/extensions.xml` with the following content
```xml
<extensions xmlns="http://maven.apache.org/EXTENSIONS/1.1.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://maven.apache.org/EXTENSIONS/1.1.0 https://maven.apache.org/xsd/core-extensions-1.0.0.xsd">
    <extension>
        <groupId>org.apache.maven.extensions</groupId>
        <artifactId>maven-build-cache-extension</artifactId>
        <version>1.2.0</version>
    </extension>
</extensions>
```
2. Copy [maven-build-cache-config.xml](https://maven.apache.org/extensions/maven-build-cache-extension/maven-build-cache-config.xml) into the `<project>/.mvn/` folder
3. Edit the `cache/configuration/remote` element
```xml
<remote enabled="true" id="rbcs">
    <url>https://rbcs.example.com/</url>
</remote>
```
4. Run Maven with
```bash
mvn -Dmaven.build.cache.enabled=true -Dmaven.build.cache.debugOutput=true -Dmaven.build.cache.remote.save.enabled=true package
```
Alternatively, you can set those properties in your `<project>/pom.xml`, as in the sketch below.
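A minimal sketch of such a `<properties>` block, using the property names from the command above:
```xml
<properties>
    <maven.build.cache.enabled>true</maven.build.cache.enabled>
    <maven.build.cache.remote.save.enabled>true</maven.build.cache.remote.save.enabled>
</properties>
```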
Read [here](https://maven.apache.org/extensions/maven-build-cache-extension/remote-cache.html)
for more information.
## FAQ
### Why should I use a build cache?
#### Build Caches Improve Build & Test Performance
Building software consists of a number of steps, like compiling sources, executing tests, and linking binaries. We've seen that a binary artifact repository helps when such a step requires an external component by downloading the artifact from the repository rather than building it locally.
However, there are many additional steps in this build process which can be optimized to reduce the build time. An obvious strategy is to avoid executing build steps which dominate the total build time when these build steps are not needed.
Most build times are dominated by the testing step.
While binary repositories cannot capture the outcome of a test build step (only the test reports
when included in binary artifacts), build caches are designed to eliminate redundant executions
of every build step. Moreover, they generalize the concept of avoiding work associated with any
incremental step of the build, including test execution, compilation and resource processing.
The mechanism itself is comparable to a pure function. That is, given some inputs such as source
files and environment parameters we know that the output is always going to be the same.
As a result, we can cache it and retrieve it based on a simple cryptographic hash of the inputs.
Build caching is supported natively by some build tools.
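As a toy illustration of that idea (not RBCS's actual implementation), a cache key can be derived by hashing every input, and the corresponding output is then looked up by that key:
```java
import java.security.MessageDigest;
import java.util.HexFormat;
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch: a task's cache key is a digest of all of its inputs,
// so identical inputs always map to the same cached output.
class BuildCacheKeySketch {
    static String cacheKey(Map<String, byte[]> inputs) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        // sort the inputs so the key does not depend on iteration order
        for (var entry : new TreeMap<>(inputs).entrySet()) {
            digest.update(entry.getKey().getBytes());
            digest.update(entry.getValue());
        }
        return HexFormat.of().formatHex(digest.digest());
    }
}
```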
#### Improve CI builds with a remote build cache
When analyzing the role of a build cache it is important to take into account the granularity
of the changes that it caches. Imagine a full build for a project with 40 to 50 modules
which fails at the last step (deployment) because the staging environment is temporarily unavailable.
Although the vast majority of the build steps (potentially thousands) succeed,
the change cannot be deployed to the staging environment.
Without a build cache one typically relies on a very complex CI configuration to reuse build step outputs
or would have to repeat the full build once the environment is available.
Some build tools don't support incremental builds properly. For example, outputs of a build started
from scratch may vary when compared to subsequent builds that rely on the initial build's output.
As a result, to preserve build integrity, it's crucial to rebuild from scratch, or cleanly, in this
scenario.
With a build cache, only the last step needs to be executed and the build can be re-triggered
when the environment is back online. This automatically saves all of the time and
resources required across the different build steps which were successfully executed.
Instead of executing the intermediate steps, the build tool pulls the outputs from the build cache,
avoiding a lot of redundant work.
#### Share outputs with a remote build cache
One of the most important advantages of a remote build cache is the ability to share build outputs.
In most CI configurations, for example, a number of pipelines are created.
These may include one for building the sources, one for testing, one for publishing the outcomes
to a remote repository, and other pipelines to test on different platforms.
There are even situations where CI builds partially build a project (i.e. some modules and not others).
Most of those pipelines share a lot of intermediate build steps. All builds which perform testing
require the binaries to be ready. All publishing builds require all previous steps to be executed.
And because modern CI infrastructure means executing everything in containerized (isolated) environments,
significant resources are wasted by repeatedly building the same intermediate artifacts.
A remote build cache reduces this overhead by orders of magnitude because it provides a way
for all those pipelines to share their outputs. After all, there is no point recreating an output that
is already available in the cache.
Because there are inherent dependencies between software components of a build,
introducing a build cache dramatically reduces the impact of exploding a component into multiple pieces,
allowing for increased modularity without increased overhead.
#### Make local developers more efficient with remote build caches
It is common for different teams within a company to work on different modules of a single large
application. In this case, most teams don't care about building the other parts of the software.
By introducing a remote cache, developers immediately benefit from pre-built artifacts when checking out code.
Because it has already been built on CI, they don't have to do it locally.
Introducing a remote cache is a huge benefit for those developers. Consider that a typical developer's
day begins by performing a code checkout. Most likely the checked-out code has already been built on CI.
Therefore, no time is wasted running the first build of the day. The remote cache provides all of the
intermediate artifacts needed. And, in the event local changes are made, the remote cache still leverages
partial cache hits for projects which are independent. As other developers in the organization request
CI builds, the remote cache continues to populate, increasing the likelihood of these remote cache hits
across team members.


@@ -14,9 +14,7 @@ allprojects { subproject ->
if(project.currentTag.isPresent()) {
version = project.currentTag.map { it[0] }.get()
} else {
version = project.gitRevision.map { gitRevision ->
"${getProperty('rbcs.version')}.${gitRevision[0..10]}"
}.get()
version = "${getProperty('rbcs.version')}-SNAPSHOT"
}
repositories {
@@ -24,7 +22,6 @@ allprojects { subproject ->
url = getProperty('gitea.maven.url')
content {
includeModule 'net.woggioni', 'jwo'
includeModule 'net.woggioni', 'xmemcached'
includeGroup 'com.lys'
}
}
@@ -41,7 +38,7 @@ allprojects { subproject ->
withSourcesJar()
modularity.inferModulePath = true
toolchain {
languageVersion = JavaLanguageVersion.of(21)
languageVersion = JavaLanguageVersion.of(23)
vendor = JvmVendorSpec.ORACLE
}
}

doc/server_configuration.md Normal file

@@ -0,0 +1,178 @@
### RBCS server configuration file elements and attributes
#### Root Element: `server`
The root element that contains all server configuration.
**Attributes:**
- `path` (optional): URI path prefix for cache requests. Example: if set to "cache", requests would be made to "http://www.example.com/cache/KEY"
#### Child Elements
#### `<bind>`
Configures server socket settings.
**Attributes:**
- `host` (required): Server bind address
- `port` (required): Server port number
- `incoming-connections-backlog-size` (optional, default: 1024): Maximum queue length for incoming connection indications
#### `<connection>`
Configures connection handling parameters.
**Attributes:**
- `idle-timeout` (optional, default: PT30S): Connection timeout when no activity
- `read-idle-timeout` (optional, default: PT60S): Connection timeout when no reads
- `write-idle-timeout` (optional, default: PT60S): Connection timeout when no writes
- `max-request-size` (optional, default: 0x4000000): Maximum allowed request body size
#### `<event-executor>`
Configures event execution settings.
**Attributes:**
- `use-virtual-threads` (optional, default: true): Whether to use virtual threads for the server handler
#### `<cache>`
Defines cache storage implementation. Two types are available:
##### InMemory Cache
A simple storage backend that uses a hash map to store data in memory
**Attributes:**
- `max-age` (default: P1D): Cache entry lifetime
- `max-size` (default: 0x1000000): Maximum cache size in bytes
- `digest` (default: MD5): Key hashing algorithm
- `enable-compression` (default: true): Enable deflate compression
- `compression-level` (default: -1): Compression level (-1 to 9)
- `chunk-size` (default: 0x10000): Maximum socket write size
##### FileSystem Cache
A storage backend that stores data in a folder on the disk
**Attributes:**
- `path`: Storage directory path
- `max-age` (default: P1D): Cache entry lifetime
- `digest` (default: MD5): Key hashing algorithm
- `enable-compression` (default: true): Enable deflate compression
- `compression-level` (default: -1): Compression level
- `chunk-size` (default: 0x10000): Maximum in-memory cache value size
#### `<authorization>`
Configures user and group-based access control.
##### `<users>`
List of registered users.
- Contains `<user>` elements:
**Attributes:**
- `name` (required): Username
- `password` (optional): For basic authentication
- Can contain an `anonymous` element to allow for unauthenticated access
##### `<groups>`
List of user groups.
- Contains `<group>` elements:
**Attributes:**
- `name`: Group name
- Can contain:
- `users`: List of user references
- `roles`: List of roles (READER/WRITER)
- `user-quota`: Per-user quota
- `group-quota`: Group-wide quota
#### `<authentication>`
Configures authentication mechanism. Options:
- `<basic>`: HTTP basic authentication
- `<client-certificate>`: TLS certificate authentication; it uses attributes of the subject's X.500 name
to extract the username and group of the client.
Example:
```xml
<client-certificate>
    <user-extractor attribute-name="CN" pattern="(.*)"/>
    <group-extractor attribute-name="O" pattern="(.*)"/>
</client-certificate>
```
- `<none>`: No authentication
#### `<tls>`
Configures TLS encryption. A hypothetical example is sketched after the attribute list below.
**Child Elements:**
- `<keystore>`: Server certificate configuration
**Attributes:**
- `file` (required): Keystore file path
- `password`: Keystore password
- `key-alias` (required): Private key alias
- `key-password`: Private key password
- `<truststore>`: Client certificate verification
**Attributes:**
- `file` (required): Truststore file path
- `password`: Truststore password
- `check-certificate-status`: Enable CRL/OCSP checking
- `require-client-certificate` (default: false): Require client certificates
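For illustration, the attributes above could be assembled like this (paths, aliases, and passwords are placeholders; the authoritative structure is defined by `rbcs.xsd`):
```xml
<tls>
    <keystore file="/etc/rbcs/keystore.pfx" password="changeit"
              key-alias="rbcs" key-password="changeit"/>
    <truststore file="/etc/rbcs/truststore.pfx" password="changeit"
                check-certificate-status="false" require-client-certificate="false"/>
</tls>
```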
----------------------------
# Complete configuration example
```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<rbcs:server xmlns:xs="http://www.w3.org/2001/XMLSchema-instance"
             xmlns:rbcs="urn:net.woggioni.rbcs.server"
             xs:schemaLocation="urn:net.woggioni.rbcs.server jpms://net.woggioni.rbcs.server/net/woggioni/rbcs/server/schema/rbcs.xsd"
>
    <bind host="0.0.0.0" port="8080" incoming-connections-backlog-size="1024"/>
    <connection
        max-request-size="67108864"
        idle-timeout="PT10S"
        read-idle-timeout="PT20S"
        write-idle-timeout="PT20S"/>
    <event-executor use-virtual-threads="true"/>
    <cache xs:type="rbcs:inMemoryCacheType" max-age="P7D" enable-compression="false" max-size="0x10000000" />
    <!--cache xs:type="rbcs:fileSystemCacheType" max-age="P7D" enable-compression="false" path="${sys:java.io.tmpdir}/rbcs"/-->
    <authorization>
        <users>
            <user name="user1" password="II+qeNLft2pZ/JVNo9F7jpjM/BqEcfsJW27NZ6dPVs8tAwHbxrJppKYsbL7J/SMl">
                <quota calls="100" period="PT1S"/>
            </user>
            <user name="user2" password="v6T9+q6/VNpvLknji3ixPiyz2YZCQMXj2FN7hvzbfc2Ig+IzAHO0iiBCH9oWuBDq"/>
            <anonymous>
                <quota calls="10" period="PT60S" initial-available-calls="10" max-available-calls="10"/>
            </anonymous>
        </users>
        <groups>
            <group name="readers">
                <users>
                    <anonymous/>
                </users>
                <roles>
                    <reader/>
                </roles>
            </group>
            <group name="writers">
                <users>
                    <user ref="user1"/>
                    <user ref="user2"/>
                </users>
                <roles>
                    <reader/>
                    <writer/>
                </roles>
            </group>
        </groups>
    </authorization>
    <authentication>
        <basic/>
    </authentication>
</rbcs:server>
```


@@ -5,7 +5,7 @@ WORKDIR /home/luser
FROM base-release AS release
ADD rbcs-cli-envelope-*.jar rbcs.jar
ENTRYPOINT ["java", "-XX:+UseZGC", "-XX:+ZGenerational", "-jar", "/home/luser/rbcs.jar", "server"]
ENTRYPOINT ["java", "-XX:+UseSerialGC", "-XX:GCTimeRatio=24", "-jar", "/home/luser/rbcs.jar", "server"]
FROM base-release AS release-memcache
ADD --chown=luser:luser rbcs-cli-envelope-*.jar rbcs.jar
@@ -13,4 +13,9 @@ RUN mkdir plugins
WORKDIR /home/luser/plugins
RUN --mount=type=bind,source=.,target=/build/distributions tar -xf /build/distributions/rbcs-server-memcache*.tar
WORKDIR /home/luser
ENTRYPOINT ["java", "-XX:+UseZGC", "-XX:+ZGenerational", "-jar", "/home/luser/rbcs.jar", "server"]
ADD logback.xml .
ENTRYPOINT ["java", "-Dlogback.configurationFile=logback.xml", "-XX:+UseSerialGC", "-XX:GCTimeRatio=24", "-jar", "/home/luser/rbcs.jar", "server"]
FROM scratch AS release-native
ADD rbcs-cli.upx rbcs-cli
ENTRYPOINT ["./rbcs-cli"]


@@ -30,6 +30,9 @@ Provider<Copy> prepareDockerBuild = tasks.register('prepareDockerBuild', Copy) {
into project.layout.buildDirectory.file('docker')
from(configurations.docker)
from(file('Dockerfile'))
from(rootProject.file('conf')) {
include 'logback.xml'
}
}
Provider<DockerBuildImage> dockerBuild = tasks.register('dockerBuildImage', DockerBuildImage) {
@@ -63,5 +66,3 @@ Provider<DockerPushImage> dockerPush = tasks.register('dockerPushImage', DockerP
}
images = [dockerTag.flatMap{ it.tag }, dockerTagMemcache.flatMap{ it.tag }]
}


@@ -2,11 +2,10 @@ org.gradle.configuration-cache=false
org.gradle.parallel=true
org.gradle.caching=true
rbcs.version = 0.1.5
rbcs.version = 0.2.0
lys.version = 2025.02.05
lys.version = 2025.02.25
gitea.maven.url = https://gitea.woggioni.net/api/packages/woggioni/maven
docker.registry.url=gitea.woggioni.net
jpms-check.configurationName = runtimeClasspath


@@ -5,7 +5,9 @@ plugins {
}
dependencies {
api catalog.netty.common
api catalog.netty.buffer
api catalog.netty.handler
}
publishing {


@@ -2,6 +2,10 @@ module net.woggioni.rbcs.api {
requires static lombok;
requires java.xml;
requires io.netty.buffer;
requires io.netty.handler;
requires io.netty.transport;
requires io.netty.common;
exports net.woggioni.rbcs.api;
exports net.woggioni.rbcs.api.exception;
exports net.woggioni.rbcs.api.message;
}


@@ -0,0 +1,13 @@
package net.woggioni.rbcs.api;
import java.util.concurrent.CompletableFuture;
public interface AsyncCloseable extends AutoCloseable {
CompletableFuture<Void> asyncClose();
@Override
default void close() throws Exception {
asyncClose().get();
}
}


@@ -1,14 +0,0 @@
package net.woggioni.rbcs.api;
import io.netty.buffer.ByteBuf;
import net.woggioni.rbcs.api.exception.ContentTooLargeException;
import java.nio.channels.ReadableByteChannel;
import java.util.concurrent.CompletableFuture;
public interface Cache extends AutoCloseable {
CompletableFuture<ReadableByteChannel> get(String key);
CompletableFuture<Void> put(String key, ByteBuf content) throws ContentTooLargeException;
}


@@ -0,0 +1,15 @@
package net.woggioni.rbcs.api;
import io.netty.channel.ChannelFactory;
import io.netty.channel.ChannelHandler;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.socket.DatagramChannel;
import io.netty.channel.socket.SocketChannel;
public interface CacheHandlerFactory extends AsyncCloseable {
ChannelHandler newHandler(
EventLoopGroup eventLoopGroup,
ChannelFactory<SocketChannel> socketChannelFactory,
ChannelFactory<DatagramChannel> datagramChannelFactory
);
}


@@ -0,0 +1,14 @@
package net.woggioni.rbcs.api;
import lombok.Getter;
import lombok.RequiredArgsConstructor;
import java.io.Serializable;
@Getter
@RequiredArgsConstructor
public class CacheValueMetadata implements Serializable {
private final String contentDisposition;
private final String mimeType;
}


@@ -35,8 +35,6 @@ public class Configuration {
@Value
public static class Connection {
Duration readTimeout;
Duration writeTimeout;
Duration idleTimeout;
Duration readIdleTimeout;
Duration writeIdleTimeout;
@@ -85,17 +83,6 @@ public class Configuration {
Group extract(X509Certificate cert);
}
@Value
public static class Throttling {
KeyStore keyStore;
TrustStore trustStore;
boolean verifyClients;
}
public enum ClientCertificate {
REQUIRED, OPTIONAL
}
@Value
public static class Tls {
KeyStore keyStore;
@@ -135,7 +122,7 @@ public class Configuration {
}
public interface Cache {
net.woggioni.rbcs.api.Cache materialize();
CacheHandlerFactory materialize();
String getNamespaceURI();
String getTypeName();
}


@@ -0,0 +1,161 @@
package net.woggioni.rbcs.api.message;
import io.netty.buffer.ByteBuf;
import io.netty.buffer.ByteBufHolder;
import lombok.Getter;
import lombok.RequiredArgsConstructor;
import net.woggioni.rbcs.api.CacheValueMetadata;
public sealed interface CacheMessage {
@Getter
@RequiredArgsConstructor
final class CacheGetRequest implements CacheMessage {
private final String key;
}
abstract sealed class CacheGetResponse implements CacheMessage {
}
@Getter
@RequiredArgsConstructor
final class CacheValueFoundResponse extends CacheGetResponse {
private final String key;
private final CacheValueMetadata metadata;
}
final class CacheValueNotFoundResponse extends CacheGetResponse {
}
@Getter
@RequiredArgsConstructor
final class CachePutRequest implements CacheMessage {
private final String key;
private final CacheValueMetadata metadata;
}
@Getter
@RequiredArgsConstructor
final class CachePutResponse implements CacheMessage {
private final String key;
}
@RequiredArgsConstructor
non-sealed class CacheContent implements CacheMessage, ByteBufHolder {
protected final ByteBuf chunk;
@Override
public ByteBuf content() {
return chunk;
}
@Override
public CacheContent copy() {
return replace(chunk.copy());
}
@Override
public CacheContent duplicate() {
return new CacheContent(chunk.duplicate());
}
@Override
public CacheContent retainedDuplicate() {
return new CacheContent(chunk.retainedDuplicate());
}
@Override
public CacheContent replace(ByteBuf content) {
return new CacheContent(content);
}
@Override
public CacheContent retain() {
chunk.retain();
return this;
}
@Override
public CacheContent retain(int increment) {
chunk.retain(increment);
return this;
}
@Override
public CacheContent touch() {
chunk.touch();
return this;
}
@Override
public CacheContent touch(Object hint) {
chunk.touch(hint);
return this;
}
@Override
public int refCnt() {
return chunk.refCnt();
}
@Override
public boolean release() {
return chunk.release();
}
@Override
public boolean release(int decrement) {
return chunk.release(decrement);
}
}
final class LastCacheContent extends CacheContent {
public LastCacheContent(ByteBuf chunk) {
super(chunk);
}
@Override
public LastCacheContent copy() {
return replace(chunk.copy());
}
@Override
public LastCacheContent duplicate() {
return new LastCacheContent(chunk.duplicate());
}
@Override
public LastCacheContent retainedDuplicate() {
return new LastCacheContent(chunk.retainedDuplicate());
}
@Override
public LastCacheContent replace(ByteBuf content) {
return new LastCacheContent(chunk);
}
@Override
public LastCacheContent retain() {
super.retain();
return this;
}
@Override
public LastCacheContent retain(int increment) {
super.retain(increment);
return this;
}
@Override
public LastCacheContent touch() {
super.touch();
return this;
}
@Override
public LastCacheContent touch(Object hint) {
super.touch(hint);
return this;
}
}
}


@@ -9,20 +9,22 @@ plugins {
id 'maven-publish'
}
import net.woggioni.gradle.envelope.EnvelopePlugin
import net.woggioni.gradle.envelope.EnvelopeJarTask
import net.woggioni.gradle.graalvm.NativeImageConfigurationTask
import net.woggioni.gradle.graalvm.NativeImagePlugin
import net.woggioni.gradle.graalvm.NativeImageTask
import net.woggioni.gradle.graalvm.UpxTask
import net.woggioni.gradle.graalvm.JlinkPlugin
import net.woggioni.gradle.graalvm.JlinkTask
Property<String> mainModuleName = objects.property(String.class)
mainModuleName.set('net.woggioni.rbcs.cli')
Property<String> mainClassName = objects.property(String.class)
mainClassName.set('net.woggioni.rbcs.cli.RemoteBuildCacheServerCli')
sourceSets {
configureNativeImage {
java {
}
kotlin {
tasks.named(JavaPlugin.COMPILE_JAVA_TASK_NAME, JavaCompile) {
options.javaModuleMainClass = mainClassName
}
}
}
configurations {
@@ -32,16 +34,25 @@ configurations {
canBeResolved = true
visible = true
}
}
envelopeJar {
mainModule = mainModuleName
mainClass = mainClassName
configureNativeImageImplementation {
extendsFrom implementation
}
configureNativeImageRuntimeOnly {
extendsFrom runtimeOnly
}
nativeImage {
extendsFrom runtimeClasspath
}
extraClasspath = ["plugins"]
}
dependencies {
configureNativeImageImplementation project
configureNativeImageImplementation project(':rbcs-server-memcache')
implementation catalog.jwo
implementation catalog.slf4j.api
implementation catalog.picocli
@@ -52,32 +63,55 @@ dependencies {
// runtimeOnly catalog.slf4j.jdk14
runtimeOnly catalog.logback.classic
// runtimeOnly catalog.slf4j.simple
nativeImage project(':rbcs-server-memcache')
}
Provider<EnvelopeJarTask> envelopeJarTaskProvider = tasks.named('envelopeJar', EnvelopeJarTask.class) {
// systemProperties['java.util.logging.config.class'] = 'net.woggioni.rbcs.LoggingConfig'
// systemProperties['log.config.source'] = 'net/woggioni/rbcs/cli/logging.properties'
// systemProperties['java.util.logging.config.file'] = 'classpath:net/woggioni/rbcs/cli/logging.properties'
Property<String> mainModuleName = objects.property(String.class)
mainModuleName.set('net.woggioni.rbcs.cli')
Property<String> mainClassName = objects.property(String.class)
mainClassName.set('net.woggioni.rbcs.cli.RemoteBuildCacheServerCli')
tasks.named(JavaPlugin.COMPILE_JAVA_TASK_NAME, JavaCompile) {
options.javaModuleMainClass = mainClassName
}
Provider<Jar> jarTaskProvider = tasks.named(JavaPlugin.JAR_TASK_NAME, Jar)
Provider<EnvelopeJarTask> envelopeJarTaskProvider = tasks.named(EnvelopePlugin.ENVELOPE_JAR_TASK_NAME, EnvelopeJarTask.class) {
mainModule = mainModuleName
mainClass = mainClassName
extraClasspath = ["plugins"]
systemProperties['logback.configurationFile'] = 'classpath:net/woggioni/rbcs/cli/logback.xml'
systemProperties['io.netty.leakDetectionLevel'] = 'DISABLED'
// systemProperties['org.slf4j.simpleLogger.showDateTime'] = 'true'
// systemProperties['org.slf4j.simpleLogger.defaultLogLevel'] = 'debug'
// systemProperties['org.slf4j.simpleLogger.log.com.google.code.yanf4j'] = 'warn'
// systemProperties['org.slf4j.simpleLogger.log.net.rubyeye.xmemcached'] = 'warn'
// systemProperties['org.slf4j.simpleLogger.dateTimeFormat'] = 'yyyy-MM-dd\'T\'HH:mm:ss.SSSZ'
}
tasks.named(NativeImagePlugin.CONFIGURE_NATIVE_IMAGE_TASK_NAME, NativeImageConfigurationTask) {
mainClass = mainClassName
mainModule = mainModuleName
mainClass = "net.woggioni.rbcs.cli.graal.GraalNativeImageConfiguration"
setClasspath(configurations.configureNativeImageRuntimeClasspath + sourceSets.graal.output.classesDirs)
mergeConfiguration = false
systemProperty('logback.configurationFile', 'classpath:net/woggioni/rbcs/cli/logback.xml')
systemProperty('io.netty.leakDetectionLevel', 'DISABLED')
modularity.inferModulePath = false
enabled = false
}
tasks.named(NativeImagePlugin.NATIVE_IMAGE_TASK_NAME, NativeImageTask) {
nativeImage {
mainClass = mainClassName
mainModule = mainModuleName
// mainModule = mainModuleName
useMusl = true
buildStaticImage = true
linkAtBuildTime = false
classpath = project.files(jarTaskProvider, configurations.nativeImage)
compressExecutable = true
compressionLevel = 10
useLZMA = false
}
Provider<UpxTask> upxTaskProvider = tasks.named(NativeImagePlugin.UPX_TASK_NAME, UpxTask) {
}
tasks.named(JlinkPlugin.JLINK_TASK_NAME, JlinkTask) {
@@ -85,16 +119,30 @@ tasks.named(JlinkPlugin.JLINK_TASK_NAME, JlinkTask) {
mainModule = 'net.woggioni.rbcs.cli'
}
tasks.named(JavaPlugin.PROCESS_RESOURCES_TASK_NAME, ProcessResources) {
from(rootProject.file('conf')) {
into('net/woggioni/rbcs/cli')
include 'logback.xml'
include 'logging.properties'
}
}
artifacts {
release(envelopeJarTaskProvider)
release(upxTaskProvider)
}
publishing {
publications {
maven(MavenPublication) {
artifact envelopeJar
artifact(upxTaskProvider) {
classifier = "linux-x86_64"
extension = "exe"
}
}
}
}


@@ -0,0 +1,6 @@
[
{
"name":"java.lang.Boolean",
"methods":[{"name":"getBoolean","parameterTypes":["java.lang.String"] }]
}
]


@@ -1,2 +1,2 @@
Args=-H:Optimize=3 --gc=serial --initialize-at-run-time=io.netty
Args=-O3 --gc=serial --install-exit-handlers --initialize-at-run-time=io.netty --enable-url-protocols=jpms --initialize-at-build-time=net.woggioni.rbcs.common.RbcsUrlStreamHandlerFactory,net.woggioni.rbcs.common.RbcsUrlStreamHandlerFactory$JpmsHandler
#-H:TraceClassInitialization=io.netty.handler.ssl.BouncyCastleAlpnSslUtils


@@ -0,0 +1,8 @@
[
{
"type":"agent-extracted",
"classes":[
]
}
]


@@ -0,0 +1,2 @@
[
]


@@ -0,0 +1,756 @@
[
{
"name":"android.os.Build$VERSION"
},
{
"name":"ch.qos.logback.classic.encoder.PatternLayoutEncoder",
"queryAllPublicMethods":true,
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"ch.qos.logback.classic.joran.SerializedModelConfigurator",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"ch.qos.logback.classic.util.DefaultJoranConfigurator",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"ch.qos.logback.core.ConsoleAppender",
"queryAllPublicMethods":true,
"methods":[{"name":"<init>","parameterTypes":[] }, {"name":"setTarget","parameterTypes":["java.lang.String"] }]
},
{
"name":"ch.qos.logback.core.OutputStreamAppender",
"methods":[{"name":"setEncoder","parameterTypes":["ch.qos.logback.core.encoder.Encoder"] }]
},
{
"name":"ch.qos.logback.core.encoder.Encoder",
"methods":[{"name":"valueOf","parameterTypes":["java.lang.String"] }]
},
{
"name":"ch.qos.logback.core.encoder.LayoutWrappingEncoder",
"methods":[{"name":"setParent","parameterTypes":["ch.qos.logback.core.spi.ContextAware"] }]
},
{
"name":"ch.qos.logback.core.pattern.PatternLayoutEncoderBase",
"methods":[{"name":"setPattern","parameterTypes":["java.lang.String"] }]
},
{
"name":"ch.qos.logback.core.spi.ContextAware",
"methods":[{"name":"valueOf","parameterTypes":["java.lang.String"] }]
},
{
"name":"com.aayushatharva.brotli4j.Brotli4jLoader"
},
{
"name":"com.github.luben.zstd.Zstd"
},
{
"name":"com.sun.crypto.provider.AESCipher$General",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"com.sun.crypto.provider.ARCFOURCipher",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"com.sun.crypto.provider.ChaCha20Cipher$ChaCha20Poly1305",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"com.sun.crypto.provider.DESCipher",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"com.sun.crypto.provider.DESedeCipher",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"com.sun.crypto.provider.DHParameters",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"com.sun.crypto.provider.GaloisCounterMode$AESGCM",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"com.sun.crypto.provider.HmacCore$HmacSHA512",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"com.sun.crypto.provider.PBKDF2Core$HmacSHA512",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"com.sun.crypto.provider.TlsMasterSecretGenerator",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"com.sun.org.apache.xerces.internal.impl.dv.xs.ExtendedSchemaDVFactoryImpl",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"com.sun.org.apache.xerces.internal.impl.dv.xs.SchemaDVFactoryImpl",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"com.sun.org.apache.xerces.internal.jaxp.SAXParserFactoryImpl",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"groovy.lang.Closure"
},
{
"name":"io.netty.bootstrap.ServerBootstrap$1"
},
{
"name":"io.netty.bootstrap.ServerBootstrap$ServerBootstrapAcceptor",
"methods":[{"name":"channelRead","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }, {"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }]
},
{
"name":"io.netty.buffer.AbstractByteBufAllocator",
"queryAllDeclaredMethods":true
},
{
"name":"io.netty.buffer.AbstractReferenceCountedByteBuf",
"fields":[{"name":"refCnt"}]
},
{
"name":"io.netty.channel.AbstractChannelHandlerContext",
"fields":[{"name":"handlerState"}]
},
{
"name":"io.netty.channel.ChannelDuplexHandler",
"methods":[{"name":"bind","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.net.SocketAddress","io.netty.channel.ChannelPromise"] }, {"name":"close","parameterTypes":["io.netty.channel.ChannelHandlerContext","io.netty.channel.ChannelPromise"] }, {"name":"connect","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.net.SocketAddress","java.net.SocketAddress","io.netty.channel.ChannelPromise"] }, {"name":"deregister","parameterTypes":["io.netty.channel.ChannelHandlerContext","io.netty.channel.ChannelPromise"] }, {"name":"disconnect","parameterTypes":["io.netty.channel.ChannelHandlerContext","io.netty.channel.ChannelPromise"] }, {"name":"flush","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"read","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"write","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object","io.netty.channel.ChannelPromise"] }]
},
{
"name":"io.netty.channel.ChannelHandlerAdapter",
"methods":[{"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }]
},
{
"name":"io.netty.channel.ChannelInboundHandlerAdapter",
"methods":[{"name":"channelActive","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelInactive","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelRead","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }, {"name":"channelReadComplete","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelRegistered","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelUnregistered","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelWritabilityChanged","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }, {"name":"userEventTriggered","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }]
},
{
"name":"io.netty.channel.ChannelInitializer",
"methods":[{"name":"channelRegistered","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }]
},
{
"name":"io.netty.channel.ChannelOutboundBuffer",
"fields":[{"name":"totalPendingSize"}, {"name":"unwritable"}]
},
{
"name":"io.netty.channel.ChannelOutboundHandlerAdapter",
"methods":[{"name":"bind","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.net.SocketAddress","io.netty.channel.ChannelPromise"] }, {"name":"connect","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.net.SocketAddress","java.net.SocketAddress","io.netty.channel.ChannelPromise"] }, {"name":"deregister","parameterTypes":["io.netty.channel.ChannelHandlerContext","io.netty.channel.ChannelPromise"] }, {"name":"disconnect","parameterTypes":["io.netty.channel.ChannelHandlerContext","io.netty.channel.ChannelPromise"] }, {"name":"flush","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"read","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }]
},
{
"name":"io.netty.channel.CombinedChannelDuplexHandler",
"methods":[{"name":"bind","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.net.SocketAddress","io.netty.channel.ChannelPromise"] }, {"name":"channelActive","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelInactive","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelRead","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }, {"name":"channelReadComplete","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelRegistered","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelUnregistered","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelWritabilityChanged","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"close","parameterTypes":["io.netty.channel.ChannelHandlerContext","io.netty.channel.ChannelPromise"] }, {"name":"connect","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.net.SocketAddress","java.net.SocketAddress","io.netty.channel.ChannelPromise"] }, {"name":"deregister","parameterTypes":["io.netty.channel.ChannelHandlerContext","io.netty.channel.ChannelPromise"] }, {"name":"disconnect","parameterTypes":["io.netty.channel.ChannelHandlerContext","io.netty.channel.ChannelPromise"] }, {"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }, {"name":"flush","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"read","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"userEventTriggered","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }, {"name":"write","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object","io.netty.channel.ChannelPromise"] }]
},
{
"name":"io.netty.channel.DefaultChannelConfig",
"fields":[{"name":"autoRead"}, {"name":"writeBufferWaterMark"}]
},
{
"name":"io.netty.channel.DefaultChannelPipeline",
"fields":[{"name":"estimatorHandle"}]
},
{
"name":"io.netty.channel.DefaultChannelPipeline$HeadContext",
"methods":[{"name":"bind","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.net.SocketAddress","io.netty.channel.ChannelPromise"] }, {"name":"channelActive","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelInactive","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelRead","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }, {"name":"channelReadComplete","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelRegistered","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelUnregistered","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelWritabilityChanged","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"close","parameterTypes":["io.netty.channel.ChannelHandlerContext","io.netty.channel.ChannelPromise"] }, {"name":"connect","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.net.SocketAddress","java.net.SocketAddress","io.netty.channel.ChannelPromise"] }, {"name":"deregister","parameterTypes":["io.netty.channel.ChannelHandlerContext","io.netty.channel.ChannelPromise"] }, {"name":"disconnect","parameterTypes":["io.netty.channel.ChannelHandlerContext","io.netty.channel.ChannelPromise"] }, {"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }, {"name":"flush","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"read","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"userEventTriggered","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }, {"name":"write","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object","io.netty.channel.ChannelPromise"] }]
},
{
"name":"io.netty.channel.DefaultChannelPipeline$TailContext",
"methods":[{"name":"channelActive","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelInactive","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelRead","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }, {"name":"channelReadComplete","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelRegistered","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelUnregistered","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelWritabilityChanged","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }, {"name":"userEventTriggered","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }]
},
{
"name":"io.netty.channel.SimpleChannelInboundHandler",
"methods":[{"name":"channelRead","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }]
},
{
"name":"io.netty.channel.embedded.EmbeddedChannel$2"
},
{
"name":"io.netty.channel.pool.SimpleChannelPool$1"
},
{
"name":"io.netty.channel.socket.nio.NioSocketChannel",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"io.netty.handler.codec.ByteToMessageDecoder",
"methods":[{"name":"channelInactive","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelRead","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }, {"name":"channelReadComplete","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"userEventTriggered","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }]
},
{
"name":"io.netty.handler.codec.MessageAggregator",
"methods":[{"name":"channelInactive","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelReadComplete","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }]
},
{
"name":"io.netty.handler.codec.MessageToByteEncoder",
"methods":[{"name":"write","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object","io.netty.channel.ChannelPromise"] }]
},
{
"name":"io.netty.handler.codec.MessageToMessageCodec",
"methods":[{"name":"channelRead","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }, {"name":"channelReadComplete","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }]
},
{
"name":"io.netty.handler.codec.MessageToMessageDecoder",
"methods":[{"name":"channelRead","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }]
},
{
"name":"io.netty.handler.codec.compression.JdkZlibDecoder"
},
{
"name":"io.netty.handler.codec.compression.JdkZlibEncoder",
"methods":[{"name":"close","parameterTypes":["io.netty.channel.ChannelHandlerContext","io.netty.channel.ChannelPromise"] }]
},
{
"name":"io.netty.handler.codec.http.HttpClientCodec"
},
{
"name":"io.netty.handler.codec.http.HttpContentDecoder",
"methods":[{"name":"channelInactive","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelReadComplete","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }]
},
{
"name":"io.netty.handler.codec.http.HttpContentDecompressor"
},
{
"name":"io.netty.handler.codec.http.HttpContentEncoder",
"methods":[{"name":"channelInactive","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }]
},
{
"name":"io.netty.handler.codec.http.HttpObjectAggregator"
},
{
"name":"io.netty.handler.codec.http.HttpServerCodec"
},
{
"name":"io.netty.handler.codec.memcache.binary.BinaryMemcacheClientCodec"
},
{
"name":"io.netty.handler.stream.ChunkedWriteHandler",
"methods":[{"name":"channelInactive","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelWritabilityChanged","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"flush","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"write","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object","io.netty.channel.ChannelPromise"] }]
},
{
"name":"io.netty.handler.timeout.IdleStateHandler",
"methods":[{"name":"channelActive","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelInactive","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelRead","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }, {"name":"channelReadComplete","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"channelRegistered","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"write","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object","io.netty.channel.ChannelPromise"] }]
},
{
"name":"io.netty.internal.tcnative.SSLContext"
},
{
"name":"io.netty.util.AbstractReferenceCounted",
"fields":[{"name":"refCnt"}]
},
{
"name":"io.netty.util.DefaultAttributeMap",
"fields":[{"name":"attributes"}]
},
{
"name":"io.netty.util.DefaultAttributeMap$DefaultAttribute",
"fields":[{"name":"attributeMap"}]
},
{
"name":"io.netty.util.Recycler$DefaultHandle",
"fields":[{"name":"state"}]
},
{
"name":"io.netty.util.ReferenceCountUtil",
"queryAllDeclaredMethods":true
},
{
"name":"io.netty.util.concurrent.DefaultPromise",
"fields":[{"name":"result"}]
},
{
"name":"io.netty.util.concurrent.SingleThreadEventExecutor",
"fields":[{"name":"state"}, {"name":"threadProperties"}]
},
{
"name":"io.netty.util.internal.shaded.org.jctools.queues.BaseMpscLinkedArrayQueueColdProducerFields",
"fields":[{"name":"producerLimit"}]
},
{
"name":"io.netty.util.internal.shaded.org.jctools.queues.BaseMpscLinkedArrayQueueConsumerFields",
"fields":[{"name":"consumerIndex"}]
},
{
"name":"io.netty.util.internal.shaded.org.jctools.queues.BaseMpscLinkedArrayQueueProducerFields",
"fields":[{"name":"producerIndex"}]
},
{
"name":"io.netty.util.internal.shaded.org.jctools.queues.unpadded.MpscUnpaddedArrayQueueConsumerIndexField",
"fields":[{"name":"consumerIndex"}]
},
{
"name":"io.netty.util.internal.shaded.org.jctools.queues.unpadded.MpscUnpaddedArrayQueueProducerIndexField",
"fields":[{"name":"producerIndex"}]
},
{
"name":"io.netty.util.internal.shaded.org.jctools.queues.unpadded.MpscUnpaddedArrayQueueProducerLimitField",
"fields":[{"name":"producerLimit"}]
},
{
"name":"java.io.FilePermission"
},
{
"name":"java.lang.Object",
"allDeclaredFields":true,
"queryAllDeclaredMethods":true
},
{
"name":"java.lang.ProcessHandle",
"methods":[{"name":"current","parameterTypes":[] }, {"name":"pid","parameterTypes":[] }]
},
{
"name":"java.lang.RuntimePermission"
},
{
"name":"java.lang.System",
"methods":[{"name":"console","parameterTypes":[] }]
},
{
"name":"java.lang.Thread",
"fields":[{"name":"threadLocalRandomProbe"}]
},
{
"name":"java.net.NetPermission"
},
{
"name":"java.net.SocketPermission"
},
{
"name":"java.net.URLPermission",
"methods":[{"name":"<init>","parameterTypes":["java.lang.String","java.lang.String"] }]
},
{
"name":"java.nio.Bits",
"fields":[{"name":"MAX_MEMORY"}, {"name":"UNALIGNED"}]
},
{
"name":"java.nio.Buffer",
"fields":[{"name":"address"}]
},
{
"name":"java.nio.ByteBuffer",
"methods":[{"name":"alignedSlice","parameterTypes":["int"] }]
},
{
"name":"java.nio.DirectByteBuffer",
"methods":[{"name":"<init>","parameterTypes":["long","long"] }]
},
{
"name":"java.nio.channels.spi.SelectorProvider",
"methods":[{"name":"openServerSocketChannel","parameterTypes":["java.net.ProtocolFamily"] }, {"name":"openSocketChannel","parameterTypes":["java.net.ProtocolFamily"] }]
},
{
"name":"java.nio.file.Path"
},
{
"name":"java.nio.file.Paths",
"methods":[{"name":"get","parameterTypes":["java.lang.String","java.lang.String[]"] }]
},
{
"name":"java.security.AlgorithmParametersSpi"
},
{
"name":"java.security.AllPermission"
},
{
"name":"java.security.KeyStoreSpi"
},
{
"name":"java.security.SecureRandomParameters"
},
{
"name":"java.security.SecurityPermission"
},
{
"name":"java.sql.Connection"
},
{
"name":"java.sql.Driver"
},
{
"name":"java.sql.DriverManager",
"methods":[{"name":"getConnection","parameterTypes":["java.lang.String"] }, {"name":"getDriver","parameterTypes":["java.lang.String"] }]
},
{
"name":"java.sql.Time",
"methods":[{"name":"<init>","parameterTypes":["long"] }]
},
{
"name":"java.sql.Timestamp",
"methods":[{"name":"valueOf","parameterTypes":["java.lang.String"] }]
},
{
"name":"java.time.Duration",
"methods":[{"name":"parse","parameterTypes":["java.lang.CharSequence"] }]
},
{
"name":"java.time.Instant",
"methods":[{"name":"parse","parameterTypes":["java.lang.CharSequence"] }]
},
{
"name":"java.time.LocalDate",
"methods":[{"name":"parse","parameterTypes":["java.lang.CharSequence"] }]
},
{
"name":"java.time.LocalDateTime",
"methods":[{"name":"parse","parameterTypes":["java.lang.CharSequence"] }]
},
{
"name":"java.time.LocalTime",
"methods":[{"name":"parse","parameterTypes":["java.lang.CharSequence"] }]
},
{
"name":"java.time.MonthDay",
"methods":[{"name":"parse","parameterTypes":["java.lang.CharSequence"] }]
},
{
"name":"java.time.OffsetDateTime",
"methods":[{"name":"parse","parameterTypes":["java.lang.CharSequence"] }]
},
{
"name":"java.time.OffsetTime",
"methods":[{"name":"parse","parameterTypes":["java.lang.CharSequence"] }]
},
{
"name":"java.time.Period",
"methods":[{"name":"parse","parameterTypes":["java.lang.CharSequence"] }]
},
{
"name":"java.time.Year",
"methods":[{"name":"parse","parameterTypes":["java.lang.CharSequence"] }]
},
{
"name":"java.time.YearMonth",
"methods":[{"name":"parse","parameterTypes":["java.lang.CharSequence"] }]
},
{
"name":"java.time.ZoneId",
"methods":[{"name":"of","parameterTypes":["java.lang.String"] }]
},
{
"name":"java.time.ZoneOffset",
"methods":[{"name":"of","parameterTypes":["java.lang.String"] }]
},
{
"name":"java.time.ZonedDateTime",
"methods":[{"name":"parse","parameterTypes":["java.lang.CharSequence"] }]
},
{
"name":"java.util.PropertyPermission"
},
{
"name":"java.util.concurrent.ForkJoinTask",
"fields":[{"name":"aux"}, {"name":"status"}]
},
{
"name":"java.util.concurrent.atomic.AtomicBoolean",
"fields":[{"name":"value"}]
},
{
"name":"java.util.concurrent.atomic.AtomicReference",
"fields":[{"name":"value"}]
},
{
"name":"java.util.concurrent.atomic.Striped64",
"fields":[{"name":"base"}, {"name":"cellsBusy"}]
},
{
"name":"java.util.concurrent.atomic.Striped64$Cell",
"fields":[{"name":"value"}]
},
{
"name":"java.util.zip.Adler32",
"methods":[{"name":"update","parameterTypes":["java.nio.ByteBuffer"] }]
},
{
"name":"java.util.zip.CRC32",
"methods":[{"name":"update","parameterTypes":["java.nio.ByteBuffer"] }]
},
{
"name":"javax.security.auth.x500.X500Principal",
"fields":[{"name":"thisX500Name"}],
"methods":[{"name":"<init>","parameterTypes":["sun.security.x509.X500Name"] }]
},
{
"name":"javax.smartcardio.CardPermission"
},
{
"name":"jdk.internal.misc.Unsafe",
"methods":[{"name":"getUnsafe","parameterTypes":[] }]
},
{
"name":"net.woggioni.rbcs.cli.RemoteBuildCacheServerCli",
"allDeclaredFields":true,
"queryAllDeclaredMethods":true
},
{
"name":"net.woggioni.rbcs.cli.RemoteBuildCacheServerCli$VersionProvider",
"allDeclaredFields":true,
"queryAllDeclaredMethods":true,
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"net.woggioni.rbcs.cli.impl.RbcsCommand",
"allDeclaredFields":true,
"queryAllDeclaredMethods":true
},
{
"name":"net.woggioni.rbcs.cli.impl.commands.BenchmarkCommand",
"allDeclaredFields":true,
"queryAllDeclaredMethods":true
},
{
"name":"net.woggioni.rbcs.cli.impl.commands.ClientCommand",
"allDeclaredFields":true,
"queryAllDeclaredMethods":true
},
{
"name":"net.woggioni.rbcs.cli.impl.commands.GetCommand",
"allDeclaredFields":true,
"queryAllDeclaredMethods":true
},
{
"name":"net.woggioni.rbcs.cli.impl.commands.HealthCheckCommand",
"allDeclaredFields":true,
"queryAllDeclaredMethods":true
},
{
"name":"net.woggioni.rbcs.cli.impl.commands.PasswordHashCommand",
"allDeclaredFields":true,
"queryAllDeclaredMethods":true
},
{
"name":"net.woggioni.rbcs.cli.impl.commands.PutCommand",
"allDeclaredFields":true,
"queryAllDeclaredMethods":true
},
{
"name":"net.woggioni.rbcs.cli.impl.commands.ServerCommand",
"allDeclaredFields":true,
"queryAllDeclaredMethods":true
},
{
"name":"net.woggioni.rbcs.cli.impl.converters.ByteSizeConverter",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"net.woggioni.rbcs.cli.impl.converters.DurationConverter",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"net.woggioni.rbcs.cli.impl.converters.OutputStreamConverter",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"net.woggioni.rbcs.client.RemoteBuildCacheClient$sendRequest$1$operationComplete$responseHandler$1",
"methods":[{"name":"channelInactive","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }]
},
{
"name":"net.woggioni.rbcs.client.RemoteBuildCacheClient$sendRequest$1$operationComplete$timeoutHandler$1",
"methods":[{"name":"userEventTriggered","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }]
},
{
"name":"net.woggioni.rbcs.server.RemoteBuildCacheServer$HttpChunkContentCompressor",
"methods":[{"name":"write","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object","io.netty.channel.ChannelPromise"] }]
},
{
"name":"net.woggioni.rbcs.server.RemoteBuildCacheServer$NettyHttpBasicAuthenticator"
},
{
"name":"net.woggioni.rbcs.server.RemoteBuildCacheServer$ServerInitializer"
},
{
"name":"net.woggioni.rbcs.server.RemoteBuildCacheServer$ServerInitializer$initChannel$4",
"methods":[{"name":"userEventTriggered","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }]
},
{
"name":"net.woggioni.rbcs.server.auth.AbstractNettyHttpAuthenticator",
"methods":[{"name":"channelRead","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }]
},
{
"name":"net.woggioni.rbcs.server.cache.FileSystemCacheHandler",
"methods":[{"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }]
},
{
"name":"net.woggioni.rbcs.server.cache.InMemoryCacheHandler",
"methods":[{"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }]
},
{
"name":"net.woggioni.rbcs.server.exception.ExceptionHandler",
"methods":[{"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }]
},
{
"name":"net.woggioni.rbcs.server.handler.CacheContentHandler",
"methods":[{"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }]
},
{
"name":"net.woggioni.rbcs.server.handler.MaxRequestSizeHandler",
"methods":[{"name":"channelRead","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }]
},
{
"name":"net.woggioni.rbcs.server.handler.ServerHandler",
"methods":[{"name":"channelRead","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }, {"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }, {"name":"write","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object","io.netty.channel.ChannelPromise"] }]
},
{
"name":"net.woggioni.rbcs.server.handler.TraceHandler",
"methods":[{"name":"channelRead","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }, {"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }]
},
{
"name":"net.woggioni.rbcs.server.memcache.MemcacheCacheHandler",
"methods":[{"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }]
},
{
"name":"net.woggioni.rbcs.server.memcache.client.MemcacheClient$sendRequest$1$operationComplete$handler$1",
"methods":[{"name":"channelInactive","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }]
},
{
"name":"net.woggioni.rbcs.server.throttling.ThrottlingHandler",
"methods":[{"name":"channelRead","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }]
},
{
"name":"sun.misc.Unsafe",
"fields":[{"name":"theUnsafe"}],
"methods":[{"name":"copyMemory","parameterTypes":["java.lang.Object","long","java.lang.Object","long","long"] }, {"name":"getAndAddLong","parameterTypes":["java.lang.Object","long","long"] }, {"name":"getAndSetObject","parameterTypes":["java.lang.Object","long","java.lang.Object"] }, {"name":"invokeCleaner","parameterTypes":["java.nio.ByteBuffer"] }, {"name":"storeFence","parameterTypes":[] }]
},
{
"name":"sun.nio.ch.SelectorImpl",
"fields":[{"name":"publicSelectedKeys"}, {"name":"selectedKeys"}]
},
{
"name":"sun.security.pkcs12.PKCS12KeyStore",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"sun.security.pkcs12.PKCS12KeyStore$DualFormatPKCS12",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"sun.security.provider.DSA$SHA224withDSA",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"sun.security.provider.DSA$SHA256withDSA",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"sun.security.provider.JavaKeyStore$JKS",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"sun.security.provider.MD5",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"sun.security.provider.NativePRNG",
"methods":[{"name":"<init>","parameterTypes":[] }, {"name":"<init>","parameterTypes":["java.security.SecureRandomParameters"] }]
},
{
"name":"sun.security.provider.NativePRNG$NonBlocking",
"methods":[{"name":"<init>","parameterTypes":[] }, {"name":"<init>","parameterTypes":["java.security.SecureRandomParameters"] }]
},
{
"name":"sun.security.provider.SHA",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"sun.security.provider.SHA2$SHA224",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"sun.security.provider.SHA2$SHA256",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"sun.security.provider.SHA5$SHA384",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"sun.security.provider.SHA5$SHA512",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"sun.security.provider.X509Factory",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"sun.security.rsa.PSSParameters",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"sun.security.rsa.RSAKeyFactory$Legacy",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"sun.security.rsa.RSAPSSSignature",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"sun.security.rsa.RSASignature$SHA224withRSA",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"sun.security.ssl.KeyManagerFactoryImpl$SunX509",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"sun.security.ssl.SSLContextImpl$DefaultSSLContext",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"sun.security.ssl.SSLContextImpl$TLSContext",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"sun.security.ssl.TrustManagerFactoryImpl$PKIXFactory",
"methods":[{"name":"<init>","parameterTypes":[] }]
},
{
"name":"sun.security.x509.AuthorityInfoAccessExtension",
"methods":[{"name":"<init>","parameterTypes":["java.lang.Boolean","java.lang.Object"] }]
},
{
"name":"sun.security.x509.AuthorityKeyIdentifierExtension",
"methods":[{"name":"<init>","parameterTypes":["java.lang.Boolean","java.lang.Object"] }]
},
{
"name":"sun.security.x509.BasicConstraintsExtension",
"methods":[{"name":"<init>","parameterTypes":["java.lang.Boolean","java.lang.Object"] }]
},
{
"name":"sun.security.x509.CRLDistributionPointsExtension",
"methods":[{"name":"<init>","parameterTypes":["java.lang.Boolean","java.lang.Object"] }]
},
{
"name":"sun.security.x509.CertificatePoliciesExtension",
"methods":[{"name":"<init>","parameterTypes":["java.lang.Boolean","java.lang.Object"] }]
},
{
"name":"sun.security.x509.KeyUsageExtension",
"methods":[{"name":"<init>","parameterTypes":["java.lang.Boolean","java.lang.Object"] }]
},
{
"name":"sun.security.x509.NetscapeCertTypeExtension",
"methods":[{"name":"<init>","parameterTypes":["java.lang.Boolean","java.lang.Object"] }]
},
{
"name":"sun.security.x509.PrivateKeyUsageExtension",
"methods":[{"name":"<init>","parameterTypes":["java.lang.Boolean","java.lang.Object"] }]
},
{
"name":"sun.security.x509.SubjectAlternativeNameExtension",
"methods":[{"name":"<init>","parameterTypes":["java.lang.Boolean","java.lang.Object"] }]
},
{
"name":"sun.security.x509.SubjectKeyIdentifierExtension",
"methods":[{"name":"<init>","parameterTypes":["java.lang.Boolean","java.lang.Object"] }]
}
]
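A minimal sketch of the kind of reflective call these entries keep working in a native image; the helper name is illustrative, the java.time.ZonedDateTime entry is the one listed above.
import java.time.ZonedDateTime
// Illustrative only: without the java.time.ZonedDateTime entry above, this reflective
// lookup would fail at run time in a native image even though it works on a regular JVM.
fun reflectiveParse(text: String): ZonedDateTime {
    val clazz = Class.forName("java.time.ZonedDateTime")
    val parse = clazz.getMethod("parse", CharSequence::class.java)
    return parse.invoke(null, text) as ZonedDateTime
}
fun main() {
    println(reflectiveParse("2025-02-25T19:15:48+08:00"))
}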

View File

@@ -0,0 +1,74 @@
{
"resources": {
"includes": [
{
"pattern": "\\QMETA-INF/MANIFEST.MF\\E"
},
{
"pattern": "\\QMETA-INF/services/ch.qos.logback.classic.spi.Configurator\\E"
},
{
"pattern": "\\QMETA-INF/services/java.lang.System$LoggerFinder\\E"
},
{
"pattern": "\\QMETA-INF/services/java.net.spi.InetAddressResolverProvider\\E"
},
{
"pattern": "\\QMETA-INF/services/java.net.spi.URLStreamHandlerProvider\\E"
},
{
"pattern": "\\QMETA-INF/services/java.nio.channels.spi.SelectorProvider\\E"
},
{
"pattern": "\\QMETA-INF/services/java.time.zone.ZoneRulesProvider\\E"
},
{
"pattern": "\\QMETA-INF/services/javax.xml.parsers.DocumentBuilderFactory\\E"
},
{
"pattern": "\\QMETA-INF/services/javax.xml.parsers.SAXParserFactory\\E"
},
{
"pattern": "\\QMETA-INF/services/net.woggioni.rbcs.api.CacheProvider\\E"
},
{
"pattern": "\\QMETA-INF/services/org.slf4j.spi.SLF4JServiceProvider\\E"
},
{
"pattern": "\\Qclasspath:net/woggioni/rbcs/cli/logback.xml\\E"
},
{
"pattern": "\\Qlogback-test.scmo\\E"
},
{
"pattern": "\\Qlogback.scmo\\E"
},
{
"pattern": "\\Qnet/woggioni/rbcs/cli/logback.xml\\E"
},
{
"pattern": "\\Qnet/woggioni/rbcs/server/rbcs-default.xml\\E"
},
{
"pattern": "\\Qnet/woggioni/rbcs/server/schema/rbcs.xsd\\E"
},
{
"pattern": "\\Qnet/woggioni/rbcs/client/schema/rbcs-client.xsd\\E"
},
{
"pattern": "\\Q/net/woggioni/rbcs/server/memcache/schema/rbcs-memcache.xsd\\E"
},
{
"pattern": "java.base:\\Qsun/text/resources/LineBreakIteratorData\\E"
}
]
},
"bundles": [
{
"name": "com.sun.org.apache.xerces.internal.impl.xpath.regex.message",
"locales": [
""
]
}
]
}
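A minimal sketch of the classpath lookup these resource patterns cover; the function name is illustrative, the resource path is one of the patterns listed above.
// Illustrative only: resources matched by the patterns above stay available to
// getResource/getResourceAsStream inside the native executable.
fun loadDefaultServerConfig(): String {
    val resource = "net/woggioni/rbcs/server/rbcs-default.xml"
    val stream = Thread.currentThread().contextClassLoader.getResourceAsStream(resource)
        ?: error("Resource '$resource' was not bundled into the image")
    return stream.use { String(it.readAllBytes(), Charsets.UTF_8) }
}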

View File

@@ -0,0 +1,11 @@
{
"types":[
{
"name":"net.woggioni.rbcs.api.CacheValueMetadata"
}
],
"lambdaCapturingTypes":[
],
"proxies":[
]
}
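A minimal sketch of the Java-serialization round trip implied by this entry, assuming CacheValueMetadata implements java.io.Serializable; the helper name is illustrative.
import java.io.ByteArrayInputStream
import java.io.ByteArrayOutputStream
import java.io.ObjectInputStream
import java.io.ObjectOutputStream
import net.woggioni.rbcs.api.CacheValueMetadata
// Sketch only: assumes CacheValueMetadata is Serializable, as implied by its presence
// in the serialization configuration above.
fun roundTrip(metadata: CacheValueMetadata): CacheValueMetadata {
    val bytes = ByteArrayOutputStream().also { buffer ->
        ObjectOutputStream(buffer).use { it.writeObject(metadata) }
    }.toByteArray()
    return ObjectInputStream(ByteArrayInputStream(bytes)).use { it.readObject() as CacheValueMetadata }
}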

View File

@@ -0,0 +1,161 @@
package net.woggioni.rbcs.cli.graal
import net.woggioni.rbcs.api.Configuration
import net.woggioni.rbcs.api.Configuration.User
import net.woggioni.rbcs.api.Role
import net.woggioni.rbcs.cli.RemoteBuildCacheServerCli
import net.woggioni.rbcs.cli.impl.commands.BenchmarkCommand
import net.woggioni.rbcs.cli.impl.commands.HealthCheckCommand
import net.woggioni.rbcs.client.RemoteBuildCacheClient
import net.woggioni.rbcs.common.HostAndPort
import net.woggioni.rbcs.common.PasswordSecurity.hashPassword
import net.woggioni.rbcs.common.RBCS
import net.woggioni.rbcs.common.Xml
import net.woggioni.rbcs.server.RemoteBuildCacheServer
import net.woggioni.rbcs.server.cache.FileSystemCacheConfiguration
import net.woggioni.rbcs.server.cache.InMemoryCacheConfiguration
import net.woggioni.rbcs.server.configuration.Parser
import net.woggioni.rbcs.server.memcache.MemcacheCacheConfiguration
import java.net.URI
import java.nio.file.Path
import java.time.Duration
import java.time.temporal.ChronoUnit
import java.util.concurrent.ExecutionException
import java.util.zip.Deflater
object GraalNativeImageConfiguration {
@JvmStatic
fun main(vararg args : String) {
val serverDoc = RemoteBuildCacheServer.DEFAULT_CONFIGURATION_URL.openStream().use {
Xml.parseXml(RemoteBuildCacheServer.DEFAULT_CONFIGURATION_URL, it)
}
Parser.parse(serverDoc)
val clientDoc = RemoteBuildCacheClient.Configuration.openStream().use {
Xml.parseXml(RemoteBuildCacheServer.DEFAULT_CONFIGURATION_URL, it)
}
Parser.parse(clientDoc)
val PASSWORD = "password"
val readersGroup = Configuration.Group("readers", setOf(Role.Reader), null, null)
val writersGroup = Configuration.Group("writers", setOf(Role.Writer), null, null)
val users = listOf(
User("user1", hashPassword(PASSWORD), setOf(readersGroup), null),
User("user2", hashPassword(PASSWORD), setOf(writersGroup), null),
User("user3", hashPassword(PASSWORD), setOf(readersGroup, writersGroup), null),
User("", null, setOf(readersGroup), null),
User("user4", hashPassword(PASSWORD), setOf(readersGroup),
Configuration.Quota(1, Duration.of(1, ChronoUnit.DAYS), 0, 1)
),
User("user5", hashPassword(PASSWORD), setOf(readersGroup),
Configuration.Quota(1, Duration.of(5, ChronoUnit.SECONDS), 0, 1)
)
)
val serverPort = RBCS.getFreePort()
val caches = listOf<Configuration.Cache>(
InMemoryCacheConfiguration(
maxAge = Duration.ofSeconds(3600),
digestAlgorithm = "MD5",
compressionLevel = Deflater.DEFAULT_COMPRESSION,
compressionEnabled = false,
maxSize = 0x1000000,
chunkSize = 0x1000
),
FileSystemCacheConfiguration(
Path.of(System.getProperty("java.io.tmpdir")).resolve("rbcs"),
maxAge = Duration.ofSeconds(3600),
digestAlgorithm = "MD5",
compressionLevel = Deflater.DEFAULT_COMPRESSION,
compressionEnabled = false,
chunkSize = 0x1000
),
MemcacheCacheConfiguration(
listOf(MemcacheCacheConfiguration.Server(
HostAndPort("127.0.0.1", 11211),
1000,
4)
),
Duration.ofSeconds(60),
"MD5",
null,
1,
0x1000
)
)
for (cache in caches) {
val serverConfiguration = Configuration(
"127.0.0.1",
serverPort,
100,
null,
Configuration.EventExecutor(true),
Configuration.Connection(
Duration.ofSeconds(10),
Duration.ofSeconds(15),
Duration.ofSeconds(15),
0x10000,
),
users.asSequence().map { it.name to it }.toMap(),
sequenceOf(writersGroup, readersGroup).map { it.name to it }.toMap(),
cache,
Configuration.BasicAuthentication(),
null,
)
MemcacheCacheConfiguration(
listOf(
MemcacheCacheConfiguration.Server(
HostAndPort("127.0.0.1", 11211),
1000,
4
)
),
Duration.ofSeconds(60),
"MD5",
null,
1,
0x1000
)
val serverHandle = RemoteBuildCacheServer(serverConfiguration).run()
val clientProfile = RemoteBuildCacheClient.Configuration.Profile(
URI.create("http://127.0.0.1:$serverPort/"),
null,
RemoteBuildCacheClient.Configuration.Authentication.BasicAuthenticationCredentials("user3", PASSWORD),
Duration.ofSeconds(3),
10,
true,
RemoteBuildCacheClient.Configuration.RetryPolicy(
3,
1000,
1.2
),
RemoteBuildCacheClient.Configuration.TrustStore(null, null, false, false)
)
HealthCheckCommand.run(clientProfile)
BenchmarkCommand.run(
clientProfile,
1000,
0x100,
true
)
serverHandle.sendShutdownSignal()
try {
serverHandle.get()
} catch (ee : ExecutionException) {
}
}
RemoteBuildCacheServerCli.main("--help")
}
}
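A hedged sketch of how a configuration run like this one is typically driven under the GraalVM tracing agent; the classpath and output directory below are placeholders, only the -agentlib flag is the standard agent option.
// Sketch only: placeholders for classpath and output directory.
fun main() {
    val process = ProcessBuilder(
        "java",
        "-agentlib:native-image-agent=config-output-dir=build/native-image-config",
        "-cp", "<application classpath>",
        "net.woggioni.rbcs.cli.graal.GraalNativeImageConfiguration"
    ).inheritIO().start()
    check(process.waitFor() == 0) { "configuration run failed" }
}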

View File

@@ -1,5 +1,6 @@
package net.woggioni.rbcs.cli
import net.woggioni.jwo.Application
import net.woggioni.rbcs.cli.impl.AbstractVersionProvider
import net.woggioni.rbcs.cli.impl.RbcsCommand
import net.woggioni.rbcs.cli.impl.commands.BenchmarkCommand
@@ -11,7 +12,6 @@ import net.woggioni.rbcs.cli.impl.commands.PutCommand
import net.woggioni.rbcs.cli.impl.commands.ServerCommand
import net.woggioni.rbcs.common.RbcsUrlStreamHandlerFactory
import net.woggioni.rbcs.common.contextLogger
import net.woggioni.jwo.Application
import picocli.CommandLine
import picocli.CommandLine.Model.CommandSpec
@@ -23,8 +23,13 @@ class RemoteBuildCacheServerCli : RbcsCommand() {
class VersionProvider : AbstractVersionProvider()
companion object {
private fun setPropertyIfNotPresent(key: String, value: String) {
System.getProperty(key) ?: System.setProperty(key, value)
}
@JvmStatic
fun main(vararg args: String) {
setPropertyIfNotPresent("logback.configurationFile", "net/woggioni/rbcs/cli/logback.xml")
setPropertyIfNotPresent("io.netty.leakDetectionLevel", "DISABLED")
val currentClassLoader = RemoteBuildCacheServerCli::class.java.classLoader
Thread.currentThread().contextClassLoader = currentClassLoader
if(currentClassLoader.javaClass.name == "net.woggioni.envelope.loader.ModuleClassLoader") {

View File

@@ -1,15 +1,20 @@
package net.woggioni.rbcs.cli.impl.commands
import net.woggioni.jwo.JWO
import net.woggioni.jwo.LongMath
import net.woggioni.rbcs.api.CacheValueMetadata
import net.woggioni.rbcs.cli.impl.RbcsCommand
import net.woggioni.rbcs.cli.impl.converters.ByteSizeConverter
import net.woggioni.rbcs.client.RemoteBuildCacheClient
import net.woggioni.rbcs.common.contextLogger
import net.woggioni.rbcs.common.createLogger
import net.woggioni.rbcs.common.debug
import net.woggioni.rbcs.common.error
import net.woggioni.rbcs.common.info
import net.woggioni.jwo.JWO
import picocli.CommandLine
import java.security.SecureRandom
import java.time.Duration
import java.time.Instant
import java.time.temporal.ChronoUnit
import java.util.concurrent.LinkedBlockingQueue
import java.util.concurrent.Semaphore
import java.util.concurrent.atomic.AtomicLong
@@ -21,39 +26,26 @@ import kotlin.random.Random
showDefaultValues = true
)
class BenchmarkCommand : RbcsCommand() {
private val log = contextLogger()
companion object {
private val log = createLogger<BenchmarkCommand>()
@CommandLine.Spec
private lateinit var spec: CommandLine.Model.CommandSpec
@CommandLine.Option(
names = ["-e", "--entries"],
description = ["Total number of elements to be added to the cache"],
paramLabel = "NUMBER_OF_ENTRIES"
)
private var numberOfEntries = 1000
@CommandLine.Option(
names = ["-s", "--size"],
description = ["Size of a cache value in bytes"],
paramLabel = "SIZE"
)
private var size = 0x1000
override fun run() {
val clientCommand = spec.parent().userObject() as ClientCommand
val profile = clientCommand.profileName.let { profileName ->
clientCommand.configuration.profiles[profileName]
?: throw IllegalArgumentException("Profile $profileName does not exist in configuration")
}
fun run(profile : RemoteBuildCacheClient.Configuration.Profile,
numberOfEntries : Int,
entrySize : Int,
useRandomValue : Boolean,
) {
val progressThreshold = LongMath.ceilDiv(numberOfEntries.toLong(), 20)
RemoteBuildCacheClient(profile).use { client ->
val entryGenerator = sequence {
val random = Random(SecureRandom.getInstance("NativePRNGNonBlocking").nextLong())
while (true) {
val key = JWO.bytesToHex(random.nextBytes(16))
val content = random.nextInt().toByte()
val value = ByteArray(size, { _ -> content })
val value = if (useRandomValue) {
random.nextBytes(entrySize)
} else {
val byteValue = random.nextInt().toByte()
ByteArray(entrySize) { _ -> byteValue }
}
yield(key to value)
}
}
@@ -65,13 +57,14 @@ class BenchmarkCommand : RbcsCommand() {
val completionCounter = AtomicLong(0)
val completionQueue = LinkedBlockingQueue<Pair<String, ByteArray>>(numberOfEntries)
val start = Instant.now()
val semaphore = Semaphore(profile.maxConnections * 3)
val semaphore = Semaphore(profile.maxConnections * 5)
val iterator = entryGenerator.take(numberOfEntries).iterator()
while (completionCounter.get() < numberOfEntries) {
if (iterator.hasNext()) {
val entry = iterator.next()
semaphore.acquire()
val future = client.put(entry.first, entry.second).thenApply { entry }
val future =
client.put(entry.first, entry.second, CacheValueMetadata(null, null)).thenApply { entry }
future.whenComplete { result, ex ->
if (ex != null) {
log.error(ex.message, ex)
@@ -79,10 +72,15 @@ class BenchmarkCommand : RbcsCommand() {
completionQueue.put(result)
}
semaphore.release()
completionCounter.incrementAndGet()
val completed = completionCounter.incrementAndGet()
if (completed.mod(progressThreshold) == 0L) {
log.debug {
"Inserted $completed / $numberOfEntries"
}
}
}
} else {
Thread.sleep(0)
Thread.sleep(Duration.of(500, ChronoUnit.MILLIS))
}
}
@@ -103,12 +101,13 @@ class BenchmarkCommand : RbcsCommand() {
}
if (entries.isNotEmpty()) {
val completionCounter = AtomicLong(0)
val semaphore = Semaphore(profile.maxConnections * 3)
val semaphore = Semaphore(profile.maxConnections * 5)
val start = Instant.now()
val it = entries.iterator()
while (completionCounter.get() < entries.size) {
if (it.hasNext()) {
val entry = it.next()
semaphore.acquire()
val future = client.get(entry.first).thenApply {
if (it == null) {
log.error {
@@ -121,11 +120,16 @@ class BenchmarkCommand : RbcsCommand() {
}
}
future.whenComplete { _, _ ->
completionCounter.incrementAndGet()
val completed = completionCounter.incrementAndGet()
if (completed.mod(progressThreshold) == 0L) {
log.debug {
"Retrieved $completed / ${entries.size}"
}
}
semaphore.release()
}
} else {
Thread.sleep(0)
Thread.sleep(Duration.of(500, ChronoUnit.MILLIS))
}
}
val end = Instant.now()
@@ -139,4 +143,43 @@ class BenchmarkCommand : RbcsCommand() {
}
}
}
}
@CommandLine.Spec
private lateinit var spec: CommandLine.Model.CommandSpec
@CommandLine.Option(
names = ["-e", "--entries"],
description = ["Total number of elements to be added to the cache"],
paramLabel = "NUMBER_OF_ENTRIES"
)
private var numberOfEntries = 1000
@CommandLine.Option(
names = ["-s", "--size"],
description = ["Size of a cache value in bytes"],
paramLabel = "SIZE",
converter = [ByteSizeConverter::class]
)
private var size = 0x1000
@CommandLine.Option(
names = ["-r", "--random"],
description = ["Insert completely random byte values"]
)
private var randomValues = false
override fun run() {
val clientCommand = spec.parent().userObject() as ClientCommand
val profile = clientCommand.profileName.let { profileName ->
clientCommand.configuration.profiles[profileName]
?: throw IllegalArgumentException("Profile $profileName does not exist in configuration")
}
run(
profile,
numberOfEntries,
size,
randomValues
)
}
}

View File

@@ -1,8 +1,8 @@
package net.woggioni.rbcs.cli.impl.commands
import net.woggioni.jwo.Application
import net.woggioni.rbcs.cli.impl.RbcsCommand
import net.woggioni.rbcs.client.RemoteBuildCacheClient
import net.woggioni.jwo.Application
import picocli.CommandLine
import java.nio.file.Path

View File

@@ -2,7 +2,7 @@ package net.woggioni.rbcs.cli.impl.commands
import net.woggioni.rbcs.cli.impl.RbcsCommand
import net.woggioni.rbcs.client.RemoteBuildCacheClient
import net.woggioni.rbcs.common.contextLogger
import net.woggioni.rbcs.common.createLogger
import picocli.CommandLine
import java.nio.file.Files
import java.nio.file.Path
@@ -13,7 +13,9 @@ import java.nio.file.Path
showDefaultValues = true
)
class GetCommand : RbcsCommand() {
private val log = contextLogger()
companion object{
private val log = createLogger<GetCommand>()
}
@CommandLine.Spec
private lateinit var spec: CommandLine.Model.CommandSpec

View File

@@ -2,7 +2,7 @@ package net.woggioni.rbcs.cli.impl.commands
import net.woggioni.rbcs.cli.impl.RbcsCommand
import net.woggioni.rbcs.client.RemoteBuildCacheClient
import net.woggioni.rbcs.common.contextLogger
import net.woggioni.rbcs.common.createLogger
import picocli.CommandLine
import java.security.SecureRandom
import kotlin.random.Random
@@ -13,7 +13,30 @@ import kotlin.random.Random
showDefaultValues = true
)
class HealthCheckCommand : RbcsCommand() {
private val log = contextLogger()
companion object{
private val log = createLogger<HealthCheckCommand>()
fun run(profile : RemoteBuildCacheClient.Configuration.Profile) {
RemoteBuildCacheClient(profile).use { client ->
val random = Random(SecureRandom.getInstance("NativePRNGNonBlocking").nextLong())
val nonce = ByteArray(0xa0)
random.nextBytes(nonce)
client.healthCheck(nonce).thenApply { value ->
if(value == null) {
throw IllegalStateException("Empty response from server")
}
val offset = value.size - nonce.size
for(i in 0 until nonce.size) {
val a = nonce[i]
val b = value[offset + i]
if(a != b) {
throw IllegalStateException("Server nonce does not match")
}
}
}.get()
}
}
}
@CommandLine.Spec
private lateinit var spec: CommandLine.Model.CommandSpec
@@ -24,22 +47,6 @@ class HealthCheckCommand : RbcsCommand() {
clientCommand.configuration.profiles[profileName]
?: throw IllegalArgumentException("Profile $profileName does not exist in configuration")
}
RemoteBuildCacheClient(profile).use { client ->
val random = Random(SecureRandom.getInstance("NativePRNGNonBlocking").nextLong())
val nonce = ByteArray(0xa0)
random.nextBytes(nonce)
client.healthCheck(nonce).thenApply { value ->
if(value == null) {
throw IllegalStateException("Empty response from server")
}
for(i in 0 until nonce.size) {
for(j in value.size - nonce.size until nonce.size) {
if(nonce[i] != value[j]) {
throw IllegalStateException("Server nonce does not match")
}
}
}
}.get()
}
run(profile)
}
}

View File

@@ -1,9 +1,9 @@
package net.woggioni.rbcs.cli.impl.commands
import net.woggioni.jwo.UncloseableOutputStream
import net.woggioni.rbcs.cli.impl.RbcsCommand
import net.woggioni.rbcs.cli.impl.converters.OutputStreamConverter
import net.woggioni.rbcs.common.PasswordSecurity.hashPassword
import net.woggioni.jwo.UncloseableOutputStream
import picocli.CommandLine
import java.io.OutputStream
import java.io.OutputStreamWriter

View File

@@ -1,11 +1,17 @@
package net.woggioni.rbcs.cli.impl.commands
import net.woggioni.jwo.Hash
import net.woggioni.jwo.JWO
import net.woggioni.jwo.NullOutputStream
import net.woggioni.rbcs.api.CacheValueMetadata
import net.woggioni.rbcs.cli.impl.RbcsCommand
import net.woggioni.rbcs.cli.impl.converters.InputStreamConverter
import net.woggioni.rbcs.client.RemoteBuildCacheClient
import net.woggioni.rbcs.common.contextLogger
import net.woggioni.rbcs.common.createLogger
import picocli.CommandLine
import java.io.InputStream
import java.nio.file.Files
import java.nio.file.Path
import java.util.UUID
@CommandLine.Command(
name = "put",
@@ -13,25 +19,41 @@ import java.io.InputStream
showDefaultValues = true
)
class PutCommand : RbcsCommand() {
private val log = contextLogger()
companion object{
private val log = createLogger<PutCommand>()
}
@CommandLine.Spec
private lateinit var spec: CommandLine.Model.CommandSpec
@CommandLine.Option(
names = ["-k", "--key"],
description = ["The key for the new value"],
description = ["The key for the new value, randomly generated if omitted"],
paramLabel = "KEY"
)
private var key : String = ""
private var key : String? = null
@CommandLine.Option(
names = ["-i", "--inline"],
description = ["File is to be displayed in the browser"],
paramLabel = "INLINE",
)
private var inline : Boolean = false
@CommandLine.Option(
names = ["-t", "--type"],
description = ["File mime type"],
paramLabel = "MIME_TYPE",
)
private var mimeType : String? = null
@CommandLine.Option(
names = ["-v", "--value"],
description = ["Path to a file containing the value to be added (defaults to stdin)"],
paramLabel = "VALUE_FILE",
converter = [InputStreamConverter::class]
)
private var value : InputStream = System.`in`
private var value : Path? = null
override fun run() {
val clientCommand = spec.parent().userObject() as ClientCommand
@@ -40,9 +62,40 @@ class PutCommand : RbcsCommand() {
?: throw IllegalArgumentException("Profile $profileName does not exist in configuration")
}
RemoteBuildCacheClient(profile).use { client ->
value.use {
client.put(key, it.readAllBytes())
val inputStream : InputStream
val mimeType : String?
val contentDisposition : String?
val valuePath = value
val actualKey : String?
if(valuePath != null) {
inputStream = Files.newInputStream(valuePath)
mimeType = this.mimeType ?: Files.probeContentType(valuePath)
contentDisposition = if(inline) {
"inline"
} else {
"attachment; filename=\"${valuePath.fileName}\""
}
actualKey = key ?: let {
val md = Hash.Algorithm.SHA512.newInputStream(Files.newInputStream(valuePath)).use {
JWO.copy(it, NullOutputStream())
it.messageDigest
}
UUID.nameUUIDFromBytes(md.digest()).toString()
}
} else {
inputStream = System.`in`
mimeType = this.mimeType
contentDisposition = if(inline) {
"inline"
} else {
null
}
actualKey = key ?: UUID.randomUUID().toString()
}
inputStream.use {
client.put(actualKey, it.readAllBytes(), CacheValueMetadata(contentDisposition, mimeType))
}.get()
println(profile.serverURI.resolve(actualKey))
}
}
}
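When no key is supplied and a value file is given, the command derives a stable key from the file content. A minimal sketch of the same idea using only JDK classes (the original uses the jwo Hash helper); the function name is illustrative.
import java.nio.file.Files
import java.nio.file.Path
import java.security.MessageDigest
import java.util.UUID
// Sketch of the content-derived key: SHA-512 over the file, folded into a name-based UUID.
fun contentDerivedKey(file: Path): String {
    val md = MessageDigest.getInstance("SHA-512")
    Files.newInputStream(file).use { input ->
        val buffer = ByteArray(0x10000)
        while (true) {
            val read = input.read(buffer)
            if (read < 0) break
            md.update(buffer, 0, read)
        }
    }
    return UUID.nameUUIDFromBytes(md.digest()).toString()
}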

View File

@@ -1,19 +1,20 @@
package net.woggioni.rbcs.cli.impl.commands
import net.woggioni.jwo.Application
import net.woggioni.jwo.JWO
import net.woggioni.rbcs.cli.impl.RbcsCommand
import net.woggioni.rbcs.cli.impl.converters.DurationConverter
import net.woggioni.rbcs.common.contextLogger
import net.woggioni.rbcs.common.createLogger
import net.woggioni.rbcs.common.debug
import net.woggioni.rbcs.common.info
import net.woggioni.rbcs.server.RemoteBuildCacheServer
import net.woggioni.rbcs.server.RemoteBuildCacheServer.Companion.DEFAULT_CONFIGURATION_URL
import net.woggioni.jwo.Application
import net.woggioni.jwo.JWO
import picocli.CommandLine
import java.io.ByteArrayOutputStream
import java.nio.file.Files
import java.nio.file.Path
import java.time.Duration
import java.util.concurrent.TimeUnit
@CommandLine.Command(
name = "server",
@@ -21,8 +22,9 @@ import java.time.Duration
showDefaultValues = true
)
class ServerCommand(app : Application) : RbcsCommand() {
private val log = contextLogger()
companion object {
private val log = createLogger<ServerCommand>()
}
private fun createDefaultConfigurationFile(configurationFile: Path) {
log.info {
@@ -57,6 +59,9 @@ class ServerCommand(app : Application) : RbcsCommand() {
createDefaultConfigurationFile(configurationFile)
}
log.debug {
"Using configuration file '$configurationFile'"
}
val configuration = RemoteBuildCacheServer.loadConfiguration(configurationFile)
log.debug {
ByteArrayOutputStream().also {
@@ -66,11 +71,20 @@ class ServerCommand(app : Application) : RbcsCommand() {
}
}
val server = RemoteBuildCacheServer(configuration)
server.run().use { server ->
timeout?.let {
Thread.sleep(it)
server.shutdown()
val handle = server.run()
val shutdownHook = Thread.ofPlatform().unstarted {
handle.sendShutdownSignal()
try {
handle.get(60, TimeUnit.SECONDS)
} catch (ex : Throwable) {
log.warn(ex.message, ex)
}
}
Runtime.getRuntime().addShutdownHook(shutdownHook)
if(timeout != null) {
Thread.sleep(timeout)
handle.sendShutdownSignal()
}
handle.get()
}
}

View File

@@ -0,0 +1,10 @@
package net.woggioni.rbcs.cli.impl.converters
import picocli.CommandLine
class ByteSizeConverter : CommandLine.ITypeConverter<Int> {
override fun convert(value: String): Int {
return Integer.decode(value)
}
}
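Because the converter delegates to Integer.decode, sizes can be given as decimal, 0x-prefixed hexadecimal or 0-prefixed octal literals on the command line; a small hedged check:
import net.woggioni.rbcs.cli.impl.converters.ByteSizeConverter
fun main() {
    // Integer.decode accepts decimal, 0x-prefixed hexadecimal and 0-prefixed octal literals.
    check(ByteSizeConverter().convert("4096") == 4096)
    check(ByteSizeConverter().convert("0x1000") == 4096)
}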

View File

@@ -4,7 +4,9 @@ import io.netty.bootstrap.Bootstrap
import io.netty.buffer.ByteBuf
import io.netty.buffer.Unpooled
import io.netty.channel.Channel
import io.netty.channel.ChannelHandler
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.ChannelInboundHandlerAdapter
import io.netty.channel.ChannelOption
import io.netty.channel.ChannelPipeline
import io.netty.channel.SimpleChannelInboundHandler
@@ -28,13 +30,19 @@ import io.netty.handler.codec.http.HttpVersion
import io.netty.handler.ssl.SslContext
import io.netty.handler.ssl.SslContextBuilder
import io.netty.handler.stream.ChunkedWriteHandler
import io.netty.handler.timeout.IdleState
import io.netty.handler.timeout.IdleStateEvent
import io.netty.handler.timeout.IdleStateHandler
import io.netty.util.concurrent.Future
import io.netty.util.concurrent.GenericFutureListener
import net.woggioni.rbcs.api.CacheValueMetadata
import net.woggioni.rbcs.client.impl.Parser
import net.woggioni.rbcs.common.RBCS.loadKeystore
import net.woggioni.rbcs.common.Xml
import net.woggioni.rbcs.common.contextLogger
import net.woggioni.rbcs.common.createLogger
import net.woggioni.rbcs.common.debug
import net.woggioni.rbcs.common.trace
import java.io.IOException
import java.net.InetSocketAddress
import java.net.URI
import java.nio.file.Files
@@ -44,14 +52,21 @@ import java.security.cert.X509Certificate
import java.time.Duration
import java.util.Base64
import java.util.concurrent.CompletableFuture
import java.util.concurrent.TimeUnit
import java.util.concurrent.TimeoutException
import java.util.concurrent.atomic.AtomicInteger
import javax.net.ssl.TrustManagerFactory
import javax.net.ssl.X509TrustManager
import kotlin.random.Random
import io.netty.util.concurrent.Future as NettyFuture
class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoCloseable {
companion object{
private val log = createLogger<RemoteBuildCacheClient>()
}
private val group: NioEventLoopGroup
private var sslContext: SslContext
private val log = contextLogger()
private val sslContext: SslContext
private val pool: ChannelPool
data class Configuration(
@@ -66,18 +81,36 @@ class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoC
data class BasicAuthenticationCredentials(val username: String, val password: String) : Authentication()
}
class TrustStore (
var file: Path?,
var password: String?,
var checkCertificateStatus: Boolean = false,
var verifyServerCertificate: Boolean = true,
)
class RetryPolicy(
val maxAttempts: Int,
val initialDelayMillis: Long,
val exp: Double
)
class Connection(
val readTimeout: Duration,
val writeTimeout: Duration,
val idleTimeout: Duration,
val readIdleTimeout: Duration,
val writeIdleTimeout: Duration
)
data class Profile(
val serverURI: URI,
val connection: Connection?,
val authentication: Authentication?,
val connectionTimeout: Duration?,
val maxConnections: Int,
val compressionEnabled: Boolean,
val retryPolicy: RetryPolicy?,
val tlsTruststore : TrustStore?
)
companion object {
@@ -93,10 +126,33 @@ class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoC
group = NioEventLoopGroup()
sslContext = SslContextBuilder.forClient().also { builder ->
(profile.authentication as? Configuration.Authentication.TlsClientAuthenticationCredentials)?.let { tlsClientAuthenticationCredentials ->
builder.keyManager(
builder.apply {
keyManager(
tlsClientAuthenticationCredentials.key,
*tlsClientAuthenticationCredentials.certificateChain
)
profile.tlsTruststore?.let { trustStore ->
if(!trustStore.verifyServerCertificate) {
trustManager(object : X509TrustManager {
override fun checkClientTrusted(certChain: Array<out X509Certificate>, p1: String?) {
}
override fun checkServerTrusted(certChain: Array<out X509Certificate>, p1: String?) {
}
override fun getAcceptedIssuers() = null
})
} else {
trustStore.file?.let {
val ts = loadKeystore(it, trustStore.password)
val trustManagerFactory: TrustManagerFactory =
TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm())
trustManagerFactory.init(ts)
trustManager(trustManagerFactory)
}
}
}
}
}
}.build()
@@ -141,18 +197,50 @@ class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoC
}
override fun channelCreated(ch: Channel) {
val connectionId = connectionCount.getAndIncrement()
val connectionId = connectionCount.incrementAndGet()
log.debug {
"Created connection $connectionId, total number of active connections: $connectionId"
"Created connection ${ch.id().asShortText()}, total number of active connections: $connectionId"
}
ch.closeFuture().addListener {
val activeConnections = connectionCount.decrementAndGet()
log.debug {
"Closed connection $connectionId, total number of active connections: $activeConnections"
"Closed connection ${
ch.id().asShortText()
}, total number of active connections: $activeConnections"
}
}
val pipeline: ChannelPipeline = ch.pipeline()
profile.connection?.also { conn ->
val readTimeout = conn.readTimeout.toMillis()
val writeTimeout = conn.writeTimeout.toMillis()
if (readTimeout > 0 || writeTimeout > 0) {
pipeline.addLast(
IdleStateHandler(
false,
readTimeout,
writeTimeout,
0,
TimeUnit.MILLISECONDS
)
)
}
val readIdleTimeout = conn.readIdleTimeout.toMillis()
val writeIdleTimeout = conn.writeIdleTimeout.toMillis()
val idleTimeout = conn.idleTimeout.toMillis()
if (readIdleTimeout > 0 || writeIdleTimeout > 0 || idleTimeout > 0) {
pipeline.addLast(
IdleStateHandler(
true,
readIdleTimeout,
writeIdleTimeout,
idleTimeout,
TimeUnit.MILLISECONDS
)
)
}
}
// Add SSL handler if needed
if ("https".equals(scheme, ignoreCase = true)) {
pipeline.addLast("ssl", sslContext.newHandler(ch.alloc(), host, port))
@@ -160,7 +248,9 @@ class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoC
// HTTP handlers
pipeline.addLast("codec", HttpClientCodec())
if(profile.compressionEnabled) {
pipeline.addLast("decompressor", HttpContentDecompressor())
}
pipeline.addLast("aggregator", HttpObjectAggregator(134217728))
pipeline.addLast("chunked", ChunkedWriteHandler())
}
@@ -206,6 +296,7 @@ class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoC
retryPolicy.initialDelayMillis.toDouble(),
retryPolicy.exp,
outcomeHandler,
Random.Default,
operation
)
} else {
@@ -253,9 +344,13 @@ class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoC
}
}
fun put(key: String, content: ByteArray): CompletableFuture<Unit> {
fun put(key: String, content: ByteArray, metadata: CacheValueMetadata): CompletableFuture<Unit> {
return executeWithRetry {
sendRequest(profile.serverURI.resolve(key), HttpMethod.PUT, content)
val extraHeaders = sequenceOf(
metadata.mimeType?.let { HttpHeaderNames.CONTENT_TYPE to it },
metadata.contentDisposition?.let { HttpHeaderNames.CONTENT_DISPOSITION to it }
).filterNotNull()
sendRequest(profile.serverURI.resolve(key), HttpMethod.PUT, content, extraHeaders.asIterable())
}.thenApply {
val status = it.status()
if (it.status() != HttpResponseStatus.CREATED && it.status() != HttpResponseStatus.OK) {
@@ -264,35 +359,83 @@ class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoC
}
}
private fun sendRequest(uri: URI, method: HttpMethod, body: ByteArray?): CompletableFuture<FullHttpResponse> {
private fun sendRequest(
uri: URI,
method: HttpMethod,
body: ByteArray?,
extraHeaders: Iterable<Pair<CharSequence, CharSequence>>? = null
): CompletableFuture<FullHttpResponse> {
val responseFuture = CompletableFuture<FullHttpResponse>()
// Custom handler for processing responses
pool.acquire().addListener(object : GenericFutureListener<NettyFuture<Channel>> {
private val handlers = mutableListOf<ChannelHandler>()
fun cleanup(channel: Channel, pipeline: ChannelPipeline) {
handlers.forEach(pipeline::remove)
pool.release(channel)
}
override fun operationComplete(channelFuture: Future<Channel>) {
if (channelFuture.isSuccess) {
val channel = channelFuture.now
val pipeline = channel.pipeline()
channel.pipeline().addLast("handler", object : SimpleChannelInboundHandler<FullHttpResponse>() {
val timeoutHandler = object : ChannelInboundHandlerAdapter() {
override fun userEventTriggered(ctx: ChannelHandlerContext, evt: Any) {
if (evt is IdleStateEvent) {
val te = when (evt.state()) {
IdleState.READER_IDLE -> TimeoutException(
"Read timeout",
)
IdleState.WRITER_IDLE -> TimeoutException("Write timeout")
IdleState.ALL_IDLE -> TimeoutException("Idle timeout")
null -> throw IllegalStateException("This should never happen")
}
responseFuture.completeExceptionally(te)
ctx.close()
}
}
}
val closeListener = GenericFutureListener<Future<Void>> {
responseFuture.completeExceptionally(IOException("The remote server closed the connection"))
pool.release(channel)
}
val responseHandler = object : SimpleChannelInboundHandler<FullHttpResponse>() {
override fun channelRead0(
ctx: ChannelHandlerContext,
response: FullHttpResponse
) {
pipeline.removeLast()
pool.release(channel)
channel.closeFuture().removeListener(closeListener)
cleanup(channel, pipeline)
responseFuture.complete(response)
}
override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
ctx.newPromise()
val ex = when (cause) {
is DecoderException -> cause.cause
else -> cause
}
responseFuture.completeExceptionally(ex)
ctx.close()
pipeline.removeLast()
pool.release(channel)
}
})
override fun channelInactive(ctx: ChannelHandlerContext) {
pool.release(channel)
responseFuture.completeExceptionally(IOException("The remote server closed the connection"))
super.channelInactive(ctx)
}
}
for (handler in arrayOf(timeoutHandler, responseHandler)) {
handlers.add(handler)
}
pipeline.addLast(timeoutHandler, responseHandler)
channel.closeFuture().addListener(closeListener)
// Prepare the HTTP request
val request: FullHttpRequest = let {
val content: ByteBuf? = body?.takeIf(ByteArray::isNotEmpty)?.let(Unpooled::wrappedBuffer)
@@ -304,15 +447,19 @@ class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoC
).apply {
headers().apply {
if (content != null) {
set(HttpHeaderNames.CONTENT_TYPE, HttpHeaderValues.APPLICATION_OCTET_STREAM)
set(HttpHeaderNames.CONTENT_LENGTH, content.readableBytes())
}
set(HttpHeaderNames.HOST, profile.serverURI.host)
set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE)
if(profile.compressionEnabled) {
set(
HttpHeaderNames.ACCEPT_ENCODING,
HttpHeaderValues.GZIP.toString() + "," + HttpHeaderValues.DEFLATE.toString()
)
}
extraHeaders?.forEach { (k, v) ->
add(k, v)
}
// Add basic auth if configured
(profile.authentication as? Configuration.Authentication.BasicAuthenticationCredentials)?.let { credentials ->
val auth = "${credentials.username}:${credentials.password}"

View File

@@ -12,6 +12,7 @@ import java.security.KeyStore
import java.security.PrivateKey
import java.security.cert.X509Certificate
import java.time.Duration
import java.time.temporal.ChronoUnit
object Parser {
@@ -29,6 +30,8 @@ object Parser {
?: throw ConfigurationException("base-url attribute is required")
var authentication: RemoteBuildCacheClient.Configuration.Authentication? = null
var retryPolicy: RemoteBuildCacheClient.Configuration.RetryPolicy? = null
var connection : RemoteBuildCacheClient.Configuration.Connection? = null
var trustStore : RemoteBuildCacheClient.Configuration.TrustStore? = null
for (gchild in child.asIterable()) {
when (gchild.localName) {
"tls-client-auth" -> {
@@ -86,6 +89,37 @@ object Parser {
exp.toDouble()
)
}
"connection" -> {
val writeTimeout = gchild.renderAttribute("write-timeout")
?.let(Duration::parse) ?: Duration.of(0, ChronoUnit.SECONDS)
val readTimeout = gchild.renderAttribute("read-timeout")
?.let(Duration::parse) ?: Duration.of(0, ChronoUnit.SECONDS)
val idleTimeout = gchild.renderAttribute("idle-timeout")
?.let(Duration::parse) ?: Duration.of(30, ChronoUnit.SECONDS)
val readIdleTimeout = gchild.renderAttribute("read-idle-timeout")
?.let(Duration::parse) ?: Duration.of(60, ChronoUnit.SECONDS)
val writeIdleTimeout = gchild.renderAttribute("write-idle-timeout")
?.let(Duration::parse) ?: Duration.of(60, ChronoUnit.SECONDS)
connection = RemoteBuildCacheClient.Configuration.Connection(
readTimeout,
writeTimeout,
idleTimeout,
readIdleTimeout,
writeIdleTimeout,
)
}
"tls-trust-store" -> {
val file = gchild.renderAttribute("file")
?.let(Path::of)
val password = gchild.renderAttribute("password")
val checkCertificateStatus = gchild.renderAttribute("check-certificate-status")
?.let(String::toBoolean) ?: false
val verifyServerCertificate = gchild.renderAttribute("verify-server-certificate")
?.let(String::toBoolean) ?: true
trustStore = RemoteBuildCacheClient.Configuration.TrustStore(file, password, checkCertificateStatus, verifyServerCertificate)
}
}
}
val maxConnections = child.renderAttribute("max-connections")
@@ -93,12 +127,19 @@ object Parser {
?: 50
val connectionTimeout = child.renderAttribute("connection-timeout")
?.let(Duration::parse)
val compressionEnabled = child.renderAttribute("enable-compression")
?.let(String::toBoolean)
?: true
profiles[name] = RemoteBuildCacheClient.Configuration.Profile(
uri,
connection,
authentication,
connectionTimeout,
maxConnections,
retryPolicy
compressionEnabled,
retryPolicy,
trustStore
)
}
}

View File

@@ -3,6 +3,8 @@ package net.woggioni.rbcs.client
import io.netty.util.concurrent.EventExecutorGroup
import java.util.concurrent.CompletableFuture
import java.util.concurrent.TimeUnit
import kotlin.math.pow
import kotlin.random.Random
sealed class OperationOutcome<T> {
class Success<T>(val result: T) : OperationOutcome<T>()
@@ -24,8 +26,10 @@ fun <T> executeWithRetry(
initialDelay: Double,
exp: Double,
outcomeHandler: OutcomeHandler<T>,
randomizer : Random?,
cb: () -> CompletableFuture<T>
): CompletableFuture<T> {
val finalResult = cb()
var future = finalResult
var shortCircuit = false
@@ -46,7 +50,7 @@ fun <T> executeWithRetry(
is OutcomeHandlerResult.Retry -> {
val res = CompletableFuture<T>()
val delay = run {
val scheduledDelay = (initialDelay * Math.pow(exp, i.toDouble())).toLong()
val scheduledDelay = (initialDelay * exp.pow(i.toDouble()) * (1.0 + (randomizer?.nextDouble(-0.5, 0.5) ?: 0.0))).toLong()
outcomeHandlerResult.suggestedDelayMillis?.coerceAtMost(scheduledDelay) ?: scheduledDelay
}
eventExecutorGroup.schedule({

View File

@@ -19,12 +19,23 @@
<xs:element name="basic-auth" type="rbcs-client:basicAuthType"/>
<xs:element name="tls-client-auth" type="rbcs-client:tlsClientAuthType"/>
</xs:choice>
<xs:element name="connection" type="rbcs-client:connectionType" minOccurs="0" />
<xs:element name="retry-policy" type="rbcs-client:retryType" minOccurs="0"/>
<xs:element name="tls-trust-store" type="rbcs-client:trustStoreType" minOccurs="0"/>
</xs:sequence>
<xs:attribute name="name" type="xs:token" use="required"/>
<xs:attribute name="base-url" type="xs:anyURI" use="required"/>
<xs:attribute name="max-connections" type="xs:positiveInteger" default="50"/>
<xs:attribute name="connection-timeout" type="xs:duration"/>
<xs:attribute name="enable-compression" type="xs:boolean" default="true"/>
</xs:complexType>
<xs:complexType name="connectionType">
<xs:attribute name="read-timeout" type="xs:duration" use="optional" default="PT0S"/>
<xs:attribute name="write-timeout" type="xs:duration" use="optional" default="PT0S"/>
<xs:attribute name="idle-timeout" type="xs:duration" use="optional" default="PT30S"/>
<xs:attribute name="read-idle-timeout" type="xs:duration" use="optional" default="PT60S"/>
<xs:attribute name="write-idle-timeout" type="xs:duration" use="optional" default="PT60S"/>
</xs:complexType>
<xs:complexType name="noAuthType"/>
@@ -47,4 +58,34 @@
<xs:attribute name="exp" type="xs:double" default="2.0"/>
</xs:complexType>
<xs:complexType name="trustStoreType">
<xs:attribute name="file" type="xs:string" use="required">
<xs:annotation>
<xs:documentation>
Path to the truststore file
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="password" type="xs:string">
<xs:annotation>
<xs:documentation>
Truststore file password
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="check-certificate-status" type="xs:boolean">
<xs:annotation>
<xs:documentation>
Whether or not to check the certificate validity using CRL/OCSP
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="verify-server-certificate" type="xs:boolean" use="optional" default="true">
<xs:annotation>
<xs:documentation>
If false, the client will blindly trust the provided server certificate
</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:complexType>
</xs:schema>

View File

@@ -89,7 +89,7 @@ class RetryTest {
val random = Random(testArgs.seed)
val future =
executeWithRetry(executor, testArgs.maxAttempt, testArgs.initialDelay, testArgs.exp, outcomeHandler) {
executeWithRetry(executor, testArgs.maxAttempt, testArgs.initialDelay, testArgs.exp, outcomeHandler, null) {
val now = System.nanoTime()
val result = CompletableFuture<Int>()
executor.submit {
@@ -129,7 +129,7 @@ class RetryTest {
previousAttempt.first + testArgs.initialDelay * Math.pow(testArgs.exp, index.toDouble()) * 1e6
val actualTimestamp = timestamp
val err = Math.abs(expectedTimestamp - actualTimestamp) / expectedTimestamp
Assertions.assertTrue(err < 1e-3)
Assertions.assertTrue(err < 1e-2)
}
if (index == attempts.size - 1 && index < testArgs.maxAttempt - 1) {
/*

View File

@@ -9,6 +9,8 @@
key-store-password="password"
key-alias="woggioni@c962475fa38"
key-password="key-password"/>
<connection write-idle-timeout="PT60S" read-idle-timeout="PT60S" write-timeout="PT0S" read-timeout="PT0S" idle-timeout="PT30S" />
<tls-trust-store file="file.pfx" password="password" check-certificate-status="false" verify-server-certificate="true"/>
</profile>
<profile name="profile2" base-url="https://rbcs2.example.com/">
<basic-auth user="user" password="password"/>

View File

@@ -5,6 +5,7 @@ module net.woggioni.rbcs.common {
requires kotlin.stdlib;
requires net.woggioni.jwo;
requires io.netty.buffer;
requires io.netty.transport;
provides java.net.spi.URLStreamHandlerProvider with net.woggioni.rbcs.common.RbcsUrlStreamHandlerFactory;
exports net.woggioni.rbcs.common;

View File

@@ -0,0 +1,15 @@
package net.woggioni.rbcs.common
import io.netty.buffer.ByteBuf
import io.netty.buffer.ByteBufAllocator
import io.netty.buffer.CompositeByteBuf
fun extractChunk(buf: CompositeByteBuf, alloc: ByteBufAllocator): ByteBuf {
val chunk = alloc.compositeBuffer()
for (component in buf.decompose(0, buf.readableBytes())) {
chunk.addComponent(true, component.retain())
}
buf.removeComponents(0, buf.numComponents())
buf.clear()
return chunk
}
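A minimal usage sketch of extractChunk: data accumulated in a composite buffer is peeled off into a new composite, leaving the original empty for further writes.
import io.netty.buffer.ByteBufAllocator
import io.netty.buffer.Unpooled
import net.woggioni.rbcs.common.extractChunk
fun main() {
    val alloc = ByteBufAllocator.DEFAULT
    val accumulator = alloc.compositeBuffer()
    accumulator.addComponent(true, Unpooled.wrappedBuffer(byteArrayOf(1, 2, 3)))
    accumulator.addComponent(true, Unpooled.wrappedBuffer(byteArrayOf(4, 5)))
    val chunk = extractChunk(accumulator, alloc)
    check(chunk.readableBytes() == 5)       // the chunk now owns the accumulated bytes
    check(accumulator.readableBytes() == 0) // the accumulator is ready for more data
    chunk.release()
    accumulator.release()
}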

View File

@@ -1,90 +1,173 @@
package net.woggioni.rbcs.common
import io.netty.channel.Channel
import io.netty.channel.ChannelHandlerContext
import org.slf4j.Logger
import org.slf4j.LoggerFactory
import org.slf4j.MDC
import org.slf4j.event.Level
import org.slf4j.spi.LoggingEventBuilder
import java.nio.file.Files
import java.nio.file.Path
import java.util.logging.LogManager
inline fun <reified T> T.contextLogger() = LoggerFactory.getLogger(T::class.java)
inline fun <reified T> createLogger() = LoggerFactory.getLogger(T::class.java)
inline fun Logger.traceParam(messageBuilder : () -> Pair<String, Array<Any>>) {
if(isTraceEnabled) {
inline fun Logger.traceParam(messageBuilder: () -> Pair<String, Array<Any>>) {
if (isTraceEnabled) {
val (format, params) = messageBuilder()
trace(format, params)
}
}
inline fun Logger.debugParam(messageBuilder : () -> Pair<String, Array<Any>>) {
if(isDebugEnabled) {
inline fun Logger.debugParam(messageBuilder: () -> Pair<String, Array<Any>>) {
if (isDebugEnabled) {
val (format, params) = messageBuilder()
debug(format, params)
}
}
inline fun Logger.infoParam(messageBuilder : () -> Pair<String, Array<Any>>) {
if(isInfoEnabled) {
inline fun Logger.infoParam(messageBuilder: () -> Pair<String, Array<Any>>) {
if (isInfoEnabled) {
val (format, params) = messageBuilder()
info(format, params)
}
}
inline fun Logger.warnParam(messageBuilder : () -> Pair<String, Array<Any>>) {
if(isWarnEnabled) {
inline fun Logger.warnParam(messageBuilder: () -> Pair<String, Array<Any>>) {
if (isWarnEnabled) {
val (format, params) = messageBuilder()
warn(format, params)
}
}
inline fun Logger.errorParam(messageBuilder : () -> Pair<String, Array<Any>>) {
if(isErrorEnabled) {
inline fun Logger.errorParam(messageBuilder: () -> Pair<String, Array<Any>>) {
if (isErrorEnabled) {
val (format, params) = messageBuilder()
error(format, params)
}
}
inline fun log(log : Logger,
filter : Logger.() -> Boolean,
loggerMethod : Logger.(String) -> Unit, messageBuilder : () -> String) {
if(log.filter()) {
inline fun log(
log: Logger,
filter: Logger.() -> Boolean,
loggerMethod: Logger.(String) -> Unit, messageBuilder: () -> String
) {
if (log.filter()) {
log.loggerMethod(messageBuilder())
}
}
inline fun Logger.log(level : Level, messageBuilder : () -> String) {
if(isEnabledForLevel(level)) {
fun withMDC(params: Array<Pair<String, String>>, cb: () -> Unit) {
object : AutoCloseable {
override fun close() {
for ((key, _) in params) MDC.remove(key)
}
}.use {
for ((key, value) in params) MDC.put(key, value)
cb()
}
}
inline fun Logger.log(level: Level, channel: Channel, crossinline messageBuilder: (LoggingEventBuilder) -> Unit ) {
if (isEnabledForLevel(level)) {
val params = arrayOf<Pair<String, String>>(
"channel-id-short" to channel.id().asShortText(),
"channel-id-long" to channel.id().asLongText(),
"remote-address" to channel.remoteAddress().toString(),
"local-address" to channel.localAddress().toString(),
)
withMDC(params) {
val builder = makeLoggingEventBuilder(level)
// for ((key, value) in params) {
// builder.addKeyValue(key, value)
// }
messageBuilder(builder)
builder.log()
}
}
}
inline fun Logger.log(level: Level, channel: Channel, crossinline messageBuilder: () -> String) {
log(level, channel) { builder ->
builder.setMessage(messageBuilder())
}
}
inline fun Logger.trace(ch: Channel, crossinline messageBuilder: () -> String) {
log(Level.TRACE, ch, messageBuilder)
}
inline fun Logger.debug(ch: Channel, crossinline messageBuilder: () -> String) {
log(Level.DEBUG, ch, messageBuilder)
}
inline fun Logger.info(ch: Channel, crossinline messageBuilder: () -> String) {
log(Level.INFO, ch, messageBuilder)
}
inline fun Logger.warn(ch: Channel, crossinline messageBuilder: () -> String) {
log(Level.WARN, ch, messageBuilder)
}
inline fun Logger.error(ch: Channel, crossinline messageBuilder: () -> String) {
log(Level.ERROR, ch, messageBuilder)
}
inline fun Logger.trace(ctx: ChannelHandlerContext, crossinline messageBuilder: () -> String) {
log(Level.TRACE, ctx.channel(), messageBuilder)
}
inline fun Logger.debug(ctx: ChannelHandlerContext, crossinline messageBuilder: () -> String) {
log(Level.DEBUG, ctx.channel(), messageBuilder)
}
inline fun Logger.info(ctx: ChannelHandlerContext, crossinline messageBuilder: () -> String) {
log(Level.INFO, ctx.channel(), messageBuilder)
}
inline fun Logger.warn(ctx: ChannelHandlerContext, crossinline messageBuilder: () -> String) {
log(Level.WARN, ctx.channel(), messageBuilder)
}
inline fun Logger.error(ctx: ChannelHandlerContext, crossinline messageBuilder: () -> String) {
log(Level.ERROR, ctx.channel(), messageBuilder)
}
inline fun Logger.log(level: Level, messageBuilder: () -> String) {
if (isEnabledForLevel(level)) {
makeLoggingEventBuilder(level).log(messageBuilder())
}
}
inline fun Logger.trace(messageBuilder : () -> String) {
if(isTraceEnabled) {
inline fun Logger.trace(messageBuilder: () -> String) {
if (isTraceEnabled) {
trace(messageBuilder())
}
}
inline fun Logger.debug(messageBuilder : () -> String) {
if(isDebugEnabled) {
inline fun Logger.debug(messageBuilder: () -> String) {
if (isDebugEnabled) {
debug(messageBuilder())
}
}
inline fun Logger.info(messageBuilder : () -> String) {
if(isInfoEnabled) {
inline fun Logger.info(messageBuilder: () -> String) {
if (isInfoEnabled) {
info(messageBuilder())
}
}
inline fun Logger.warn(messageBuilder : () -> String) {
if(isWarnEnabled) {
inline fun Logger.warn(messageBuilder: () -> String) {
if (isWarnEnabled) {
warn(messageBuilder())
}
}
inline fun Logger.error(messageBuilder : () -> String) {
if(isErrorEnabled) {
inline fun Logger.error(messageBuilder: () -> String) {
if (isErrorEnabled) {
error(messageBuilder())
}
}
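A hedged usage sketch of the channel-aware overloads above from inside a Netty handler; the handler class is illustrative. For the duration of the call, channel-id-short, channel-id-long, remote-address and local-address are available in the MDC, so a logback pattern such as %X{channel-id-short} can print them.
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.ChannelInboundHandlerAdapter
import net.woggioni.rbcs.common.createLogger
import net.woggioni.rbcs.common.debug
// Sketch only: demonstrates the ChannelHandlerContext-based logging extension.
class LoggingExampleHandler : ChannelInboundHandlerAdapter() {
    companion object {
        private val log = createLogger<LoggingExampleHandler>()
    }
    override fun channelRead(ctx: ChannelHandlerContext, msg: Any) {
        log.debug(ctx) { "received ${msg.javaClass.simpleName}" }
        ctx.fireChannelRead(msg)
    }
}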
@@ -94,9 +177,9 @@ class LoggingConfig {
init {
val logManager = LogManager.getLogManager()
System.getProperty("log.config.source")?.let withSource@ { source ->
System.getProperty("log.config.source")?.let withSource@{ source ->
val urls = LoggingConfig::class.java.classLoader.getResources(source)
while(urls.hasMoreElements()) {
while (urls.hasMoreElements()) {
val url = urls.nextElement()
url.openStream().use { inputStream ->
logManager.readConfiguration(inputStream)

View File

@@ -7,7 +7,18 @@ import javax.crypto.SecretKeyFactory
import javax.crypto.spec.PBEKeySpec
object PasswordSecurity {
private const val KEY_LENGTH = 256
enum class Algorithm(
val codeName : String,
val keyLength : Int,
val iterations : Int) {
PBEWithHmacSHA512_224AndAES_256("PBEWithHmacSHA512/224AndAES_256", 64, 1),
PBEWithHmacSHA1AndAES_256("PBEWithHmacSHA1AndAES_256",64, 1),
PBEWithHmacSHA384AndAES_128("PBEWithHmacSHA384AndAES_128", 64,1),
PBEWithHmacSHA384AndAES_256("PBEWithHmacSHA384AndAES_256",64,1),
PBKDF2WithHmacSHA512("PBKDF2WithHmacSHA512",512, 1),
PBKDF2WithHmacSHA384("PBKDF2WithHmacSHA384",384, 1);
}
private fun concat(arr1: ByteArray, arr2: ByteArray): ByteArray {
val result = ByteArray(arr1.size + arr2.size)
@@ -23,22 +34,22 @@ object PasswordSecurity {
return result
}
fun hashPassword(password : String, salt : String? = null) : String {
fun hashPassword(password : String, salt : String? = null, algorithm : Algorithm = Algorithm.PBKDF2WithHmacSHA512) : String {
val actualSalt = salt?.let(Base64.getDecoder()::decode) ?: SecureRandom().run {
val result = ByteArray(16)
nextBytes(result)
result
}
val spec: KeySpec = PBEKeySpec(password.toCharArray(), actualSalt, 10, KEY_LENGTH)
val factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1")
val spec: KeySpec = PBEKeySpec(password.toCharArray(), actualSalt, algorithm.iterations, algorithm.keyLength)
val factory = SecretKeyFactory.getInstance(algorithm.codeName)
val hash = factory.generateSecret(spec).encoded
return String(Base64.getEncoder().encode(concat(hash, actualSalt)))
}
fun decodePasswordHash(passwordHash : String) : Pair<ByteArray, ByteArray> {
val decoded = Base64.getDecoder().decode(passwordHash)
val hash = ByteArray(KEY_LENGTH / 8)
val salt = ByteArray(decoded.size - KEY_LENGTH / 8)
fun decodePasswordHash(encodedPasswordHash : String, algorithm: Algorithm = Algorithm.PBKDF2WithHmacSHA512) : Pair<ByteArray, ByteArray> {
val decoded = Base64.getDecoder().decode(encodedPasswordHash)
val hash = ByteArray(algorithm.keyLength / 8)
val salt = ByteArray(decoded.size - algorithm.keyLength / 8)
System.arraycopy(decoded, 0, hash, 0, hash.size)
System.arraycopy(decoded, hash.size, salt, 0, salt.size)
return hash to salt
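With the algorithm parametrized, a stored hash can still be verified by re-deriving it with the extracted salt. A minimal sketch, assuming the default PBKDF2WithHmacSHA512 algorithm was used on both sides; the function name is illustrative.
import java.util.Base64
import net.woggioni.rbcs.common.PasswordSecurity
// Sketch only: re-derive the hash with the stored salt and compare the encoded results.
fun verifyPassword(candidate: String, storedHash: String): Boolean {
    val (_, salt) = PasswordSecurity.decodePasswordHash(storedHash)
    val recomputed = PasswordSecurity.hashPassword(candidate, Base64.getEncoder().encodeToString(salt))
    return recomputed == storedHash
}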

View File

@@ -1,9 +1,26 @@
package net.woggioni.rbcs.common
import net.woggioni.jwo.JWO
import net.woggioni.jwo.Tuple2
import java.io.IOException
import java.net.InetAddress
import java.net.ServerSocket
import java.net.URI
import java.net.URL
import java.nio.file.Files
import java.nio.file.Path
import java.security.KeyStore
import java.security.MessageDigest
import java.security.cert.CertPathValidator
import java.security.cert.CertPathValidatorException
import java.security.cert.CertificateException
import java.security.cert.CertificateFactory
import java.security.cert.PKIXParameters
import java.security.cert.PKIXRevocationChecker
import java.security.cert.X509Certificate
import java.util.EnumSet
import javax.net.ssl.TrustManagerFactory
import javax.net.ssl.X509TrustManager
object RBCS {
fun String.toUrl() : URL = URL.of(URI(this), null)
@@ -12,9 +29,27 @@ object RBCS {
const val RBCS_PREFIX: String = "rbcs"
const val XML_SCHEMA_NAMESPACE_URI = "http://www.w3.org/2001/XMLSchema-instance"
fun ByteArray.toInt(index : Int = 0) : Long {
if(index + 4 > size) throw IllegalArgumentException("Not enough bytes to decode a 32 bits integer")
var value : Long = 0
for (b in index until index + 4) {
value = (value shl 8) + (get(b).toInt() and 0xFF)
}
return value
}
fun ByteArray.toLong(index : Int = 0) : Long {
if(index + 8 > size) throw IllegalArgumentException("Not enough bytes to decode a 64 bits long integer")
var value : Long = 0
for (b in index until index + 8) {
value = (value shl 8) + (get(b).toInt() and 0xFF)
}
return value
}
fun digest(
data: ByteArray,
md: MessageDigest = MessageDigest.getInstance("MD5")
md: MessageDigest
): ByteArray {
md.update(data)
return md.digest()
@@ -22,8 +57,104 @@ object RBCS {
fun digestString(
data: ByteArray,
md: MessageDigest = MessageDigest.getInstance("MD5")
md: MessageDigest
): String {
return JWO.bytesToHex(digest(data, md))
}
fun processCacheKey(key: String, digestAlgorithm: String?) = digestAlgorithm
?.let(MessageDigest::getInstance)
?.let { md ->
digest(key.toByteArray(), md)
} ?: key.toByteArray(Charsets.UTF_8)
fun Long.toIntOrNull(): Int? {
return if (this >= Int.MIN_VALUE && this <= Int.MAX_VALUE) {
toInt()
} else {
null
}
}
fun getFreePort(): Int {
var count = 0
while (count < 50) {
try {
ServerSocket(0, 50, InetAddress.getLocalHost()).use { serverSocket ->
val candidate = serverSocket.localPort
if (candidate > 0) {
return candidate
} else {
throw RuntimeException("Got invalid port number: $candidate")
}
}
} catch (ignored: IOException) {
++count
}
}
throw RuntimeException("Error trying to find an open port")
}
fun loadKeystore(file: Path, password: String?): KeyStore {
val ext = JWO.splitExtension(file)
.map(Tuple2<String, String>::get_2)
.orElseThrow {
IllegalArgumentException(
"Keystore file '${file}' must have .jks, .p12, .pfx extension"
)
}
val keystore = when (ext.substring(1).lowercase()) {
"jks" -> KeyStore.getInstance("JKS")
"p12", "pfx" -> KeyStore.getInstance("PKCS12")
else -> throw IllegalArgumentException(
"Keystore file '${file}' must have .jks, .p12, .pfx extension"
)
}
Files.newInputStream(file).use {
keystore.load(it, password?.let(String::toCharArray))
}
return keystore
}
fun getTrustManager(trustStore: KeyStore?, certificateRevocationEnabled: Boolean): X509TrustManager {
return if (trustStore != null) {
val certificateFactory = CertificateFactory.getInstance("X.509")
val validator = CertPathValidator.getInstance("PKIX").apply {
val rc = revocationChecker as PKIXRevocationChecker
rc.options = EnumSet.of(
PKIXRevocationChecker.Option.NO_FALLBACK
)
}
val params = PKIXParameters(trustStore).apply {
isRevocationEnabled = certificateRevocationEnabled
}
object : X509TrustManager {
override fun checkClientTrusted(chain: Array<out X509Certificate>, authType: String) {
val clientCertificateChain = certificateFactory.generateCertPath(chain.toList())
try {
validator.validate(clientCertificateChain, params)
} catch (ex: CertPathValidatorException) {
throw CertificateException(ex)
}
}
override fun checkServerTrusted(chain: Array<out X509Certificate>, authType: String) {
throw NotImplementedError()
}
private val acceptedIssuers = trustStore.aliases().asSequence()
.filter(trustStore::isCertificateEntry)
.map(trustStore::getCertificate)
.map { it as X509Certificate }
.toList()
.toTypedArray()
override fun getAcceptedIssuers() = acceptedIssuers
}
} else {
val trustManagerFactory = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm())
trustManagerFactory.trustManagers.asSequence().filter { it is X509TrustManager }
.single() as X509TrustManager
}
}
}
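A short sketch of how the new helpers combine (the key string and digest name are arbitrary examples, not values taken from the repository):

```kotlin
import net.woggioni.rbcs.common.RBCS.processCacheKey
import net.woggioni.rbcs.common.RBCS.toLong

fun main() {
    // With a digest algorithm the key is hashed before being used as a cache key,
    // otherwise its raw UTF-8 bytes are used as-is
    val hashed = processCacheKey("some-build-cache-key", "SHA-256")
    val plain = processCacheKey("some-build-cache-key", null)
    println("${hashed.size} bytes vs ${plain.size} bytes") // 32 bytes vs 20 bytes

    // toLong decodes 8 bytes in big-endian order starting at the given index
    val bytes = byteArrayOf(0, 0, 0, 0, 0, 0, 1, 0)
    println(bytes.toLong()) // prints 256
}
```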

View File

@@ -1,7 +1,6 @@
package net.woggioni.rbcs.common
import net.woggioni.jwo.JWO
import org.slf4j.LoggerFactory
import org.slf4j.event.Level
import org.w3c.dom.Document
import org.w3c.dom.Element
@@ -79,7 +78,7 @@ class Xml(val doc: Document, val element: Element) {
class ErrorHandler(private val fileURL: URL) : ErrHandler {
companion object {
private val log = LoggerFactory.getLogger(ErrorHandler::class.java)
private val log = createLogger<ErrorHandler>()
}
override fun warning(ex: SAXParseException)= err(ex, Level.WARN)

View File

@@ -0,0 +1,38 @@
package net.woggioni.rbcs.common
import net.woggioni.rbcs.common.PasswordSecurity.decodePasswordHash
import net.woggioni.rbcs.common.PasswordSecurity.hashPassword
import org.junit.jupiter.api.Assertions
import org.junit.jupiter.api.Test
import org.junit.jupiter.params.ParameterizedTest
import org.junit.jupiter.params.provider.EnumSource
import java.security.Provider
import java.security.Security
import java.util.Base64
class PasswordHashingTest {
@EnumSource(PasswordSecurity.Algorithm::class)
@ParameterizedTest
fun test(algo: PasswordSecurity.Algorithm) {
val password = "password"
val encoded = hashPassword(password, algorithm = algo)
val (_, salt) = decodePasswordHash(encoded, algo)
Assertions.assertEquals(encoded,
hashPassword(password, salt = salt.let(Base64.getEncoder()::encodeToString), algorithm = algo)
)
}
@Test
fun listAvailableAlgorithms() {
Security.getProviders().asSequence()
.flatMap { provider: Provider -> provider.services.asSequence() }
.filter { service: Provider.Service -> "SecretKeyFactory" == service.type }
.map(Provider.Service::getAlgorithm)
.forEach {
println(it)
}
}
}

View File

@@ -0,0 +1,46 @@
# RBCS Memcache plugin
This plugin allows RBCS to store and retrieve data from a memcache cluster.
The memcache server selection is simply based on a hash of the key;
deflate compression is also supported and is performed by the RBCS server.
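For illustration, the selection amounts to something like the following sketch (a simplified stand-in for the plugin's checksum-based routing, not its actual code):

```kotlin
// Route a key deterministically to one of the configured servers:
// the same key always lands on the same memcache instance,
// while different keys spread across the cluster
fun pickServer(servers: List<String>, keyChecksum: Int): String =
    servers[Math.floorMod(keyChecksum, servers.size)]
```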
## Quickstart
The plugin can be built with
```bash
./gradlew rbcs-server-memcache:bundle
```
which creates a `.tar` archive in the `build/distributions` folder.
The archive is supposed to be extracted inside the RBCS server's `plugins` directory.
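For example (the archive name and the server installation path below are placeholders, adjust them to your build output and deployment layout):

```bash
./gradlew rbcs-server-memcache:bundle
tar -xf rbcs-server-memcache/build/distributions/rbcs-server-memcache-*.tar \
    -C /path/to/rbcs-server/plugins
```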
## Configuration
The plugin can be enabled by setting the `xs:type` attribute of the `cache` element
to `rbcs-memcache:memcacheCacheType` (the `rbcs-memcache` prefix must be bound to the plugin's XML namespace, as in the example below).
The plugin currently supports the following configuration attributes:
- `max-age`: the amount of time cache entries will be retained on memcache
- `digest`: digest algorithm to apply to the key before submission
  to memcache (optional, no digest is applied if omitted)
- `compression-mode`: compression algorithm to apply to cache values before submission
  to memcache, currently only `deflate` is supported (optional, compression is disabled if omitted)
- `compression-level`: compression level to use, deflate supports compression levels from 1 to 9,
  where 1 favors speed at the expense of compression ratio (optional, 6 is used if omitted)
- `chunk-size`: size of the data chunks exchanged with the memcache servers when streaming
  cache values (optional, `0x10000` is used if omitted)
```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<rbcs:server xmlns:xs="http://www.w3.org/2001/XMLSchema-instance"
xmlns:rbcs="urn:net.woggioni.rbcs.server"
xmlns:rbcs-memcache="urn:net.woggioni.rbcs.server.memcache"
xs:schemaLocation="urn:net.woggioni.rbcs.server.memcache jpms://net.woggioni.rbcs.server.memcache/net/woggioni/rbcs/server/memcache/schema/rbcs-memcache.xsd urn:net.woggioni.rbcs.server jpms://net.woggioni.rbcs.server/net/woggioni/rbcs/server/schema/rbcs.xsd"
>
...
<cache xs:type="rbcs-memcache:memcacheCacheType"
max-age="P7D"
digest="SHA-256"
compression-mode="deflate"
compression-level="6"
chunk-size="0x10000">
<server host="127.0.0.1" port="11211" max-connections="256"/>
<server host="127.0.0.1" port="11212" max-connections="256"/>
</cache>
...
```

View File

@@ -34,6 +34,7 @@ dependencies {
implementation catalog.jwo
implementation catalog.slf4j.api
implementation catalog.netty.common
implementation catalog.netty.handler
implementation catalog.netty.codec.memcache
bundle catalog.netty.codec.memcache

View File

@@ -11,6 +11,7 @@ module net.woggioni.rbcs.server.memcache {
requires io.netty.codec.memcache;
requires io.netty.common;
requires io.netty.buffer;
requires io.netty.handler;
requires org.slf4j;
provides CacheProvider with net.woggioni.rbcs.server.memcache.MemcacheCacheProvider;

View File

@@ -1,23 +0,0 @@
package net.woggioni.rbcs.server.memcache
import io.netty.buffer.ByteBuf
import net.woggioni.rbcs.api.Cache
import net.woggioni.rbcs.server.memcache.client.MemcacheClient
import java.nio.channels.ReadableByteChannel
import java.util.concurrent.CompletableFuture
class MemcacheCache(private val cfg : MemcacheCacheConfiguration) : Cache {
private val memcacheClient = MemcacheClient(cfg)
override fun get(key: String): CompletableFuture<ReadableByteChannel?> {
return memcacheClient.get(key)
}
override fun put(key: String, content: ByteBuf): CompletableFuture<Void> {
return memcacheClient.put(key, content, cfg.maxAge)
}
override fun close() {
memcacheClient.close()
}
}

View File

@@ -1,23 +1,31 @@
package net.woggioni.rbcs.server.memcache
import io.netty.channel.ChannelFactory
import io.netty.channel.ChannelHandler
import io.netty.channel.EventLoopGroup
import io.netty.channel.pool.FixedChannelPool
import io.netty.channel.socket.DatagramChannel
import io.netty.channel.socket.SocketChannel
import net.woggioni.rbcs.api.CacheHandlerFactory
import net.woggioni.rbcs.api.Configuration
import net.woggioni.rbcs.common.HostAndPort
import net.woggioni.rbcs.server.memcache.client.MemcacheClient
import java.time.Duration
import java.util.concurrent.CompletableFuture
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.AtomicInteger
import java.util.concurrent.atomic.AtomicReference
data class MemcacheCacheConfiguration(
val servers: List<Server>,
val maxAge: Duration = Duration.ofDays(1),
val maxSize: Int = 0x100000,
val digestAlgorithm: String? = null,
val compressionMode: CompressionMode? = null,
val compressionLevel: Int,
val chunkSize: Int
) : Configuration.Cache {
enum class CompressionMode {
/**
* Gzip mode
*/
GZIP,
/**
* Deflate mode
*/
@@ -25,13 +33,59 @@ data class MemcacheCacheConfiguration(
}
data class Server(
val endpoint : HostAndPort,
val connectionTimeoutMillis : Int?,
val maxConnections : Int
val endpoint: HostAndPort,
val connectionTimeoutMillis: Int?,
val maxConnections: Int
)
override fun materialize() = object : CacheHandlerFactory {
override fun materialize() = MemcacheCache(this)
private val connectionPoolMap = ConcurrentHashMap<HostAndPort, FixedChannelPool>()
override fun newHandler(
eventLoop: EventLoopGroup,
socketChannelFactory: ChannelFactory<SocketChannel>,
datagramChannelFactory: ChannelFactory<DatagramChannel>
): ChannelHandler {
return MemcacheCacheHandler(
MemcacheClient(
this@MemcacheCacheConfiguration.servers,
chunkSize,
eventLoop,
socketChannelFactory,
connectionPoolMap
),
digestAlgorithm,
compressionMode != null,
compressionLevel,
chunkSize,
maxAge
)
}
override fun asyncClose() = object : CompletableFuture<Void>() {
init {
val failure = AtomicReference<Throwable>(null)
val pools = connectionPoolMap.values.toList()
val npools = pools.size
val finished = AtomicInteger(0)
pools.forEach { pool ->
pool.closeAsync().addListener {
if (!it.isSuccess) {
failure.compareAndSet(null, it.cause())
}
if(finished.incrementAndGet() == npools) {
when(val ex = failure.get()) {
null -> complete(null)
else -> completeExceptionally(ex)
}
}
}
}
}
}
}
override fun getNamespaceURI() = "urn:net.woggioni.rbcs.server.memcache"

View File

@@ -0,0 +1,409 @@
package net.woggioni.rbcs.server.memcache
import io.netty.buffer.ByteBuf
import io.netty.buffer.ByteBufAllocator
import io.netty.buffer.CompositeByteBuf
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.SimpleChannelInboundHandler
import io.netty.handler.codec.memcache.DefaultLastMemcacheContent
import io.netty.handler.codec.memcache.DefaultMemcacheContent
import io.netty.handler.codec.memcache.LastMemcacheContent
import io.netty.handler.codec.memcache.MemcacheContent
import io.netty.handler.codec.memcache.binary.BinaryMemcacheOpcodes
import io.netty.handler.codec.memcache.binary.BinaryMemcacheResponse
import io.netty.handler.codec.memcache.binary.BinaryMemcacheResponseStatus
import io.netty.handler.codec.memcache.binary.DefaultBinaryMemcacheRequest
import net.woggioni.rbcs.api.CacheValueMetadata
import net.woggioni.rbcs.api.exception.ContentTooLargeException
import net.woggioni.rbcs.api.message.CacheMessage
import net.woggioni.rbcs.api.message.CacheMessage.CacheContent
import net.woggioni.rbcs.api.message.CacheMessage.CacheGetRequest
import net.woggioni.rbcs.api.message.CacheMessage.CachePutRequest
import net.woggioni.rbcs.api.message.CacheMessage.CachePutResponse
import net.woggioni.rbcs.api.message.CacheMessage.CacheValueFoundResponse
import net.woggioni.rbcs.api.message.CacheMessage.CacheValueNotFoundResponse
import net.woggioni.rbcs.api.message.CacheMessage.LastCacheContent
import net.woggioni.rbcs.common.ByteBufInputStream
import net.woggioni.rbcs.common.ByteBufOutputStream
import net.woggioni.rbcs.common.RBCS.processCacheKey
import net.woggioni.rbcs.common.RBCS.toIntOrNull
import net.woggioni.rbcs.common.createLogger
import net.woggioni.rbcs.common.debug
import net.woggioni.rbcs.common.extractChunk
import net.woggioni.rbcs.common.trace
import net.woggioni.rbcs.server.memcache.client.MemcacheClient
import net.woggioni.rbcs.server.memcache.client.MemcacheRequestController
import net.woggioni.rbcs.server.memcache.client.MemcacheResponseHandler
import java.io.ByteArrayOutputStream
import java.io.ObjectInputStream
import java.io.ObjectOutputStream
import java.nio.ByteBuffer
import java.nio.channels.Channels
import java.nio.channels.FileChannel
import java.nio.channels.ReadableByteChannel
import java.nio.file.Files
import java.nio.file.StandardOpenOption
import java.time.Duration
import java.time.Instant
import java.util.concurrent.CompletableFuture
import java.util.zip.Deflater
import java.util.zip.DeflaterOutputStream
import java.util.zip.InflaterOutputStream
import io.netty.channel.Channel as NettyChannel
class MemcacheCacheHandler(
private val client: MemcacheClient,
private val digestAlgorithm: String?,
private val compressionEnabled: Boolean,
private val compressionLevel: Int,
private val chunkSize: Int,
private val maxAge: Duration
) : SimpleChannelInboundHandler<CacheMessage>() {
companion object {
private val log = createLogger<MemcacheCacheHandler>()
private fun encodeExpiry(expiry: Duration): Int {
val expirySeconds = expiry.toSeconds()
return expirySeconds.toInt().takeIf { it.toLong() == expirySeconds }
?: Instant.ofEpochSecond(expirySeconds).epochSecond.toInt()
}
}
private inner class InProgressGetRequest(
private val key: String,
private val ctx: ChannelHandlerContext
) {
private val acc = ctx.alloc().compositeBuffer()
private val chunk = ctx.alloc().compositeBuffer()
private val outputStream = ByteBufOutputStream(chunk).let {
if (compressionEnabled) {
InflaterOutputStream(it)
} else {
it
}
}
private var responseSent = false
private var metadataSize: Int? = null
fun write(buf: ByteBuf) {
acc.addComponent(true, buf.retain())
if (metadataSize == null && acc.readableBytes() >= Int.SIZE_BYTES) {
metadataSize = acc.readInt()
}
metadataSize
?.takeIf { !responseSent }
?.takeIf { acc.readableBytes() >= it }
?.let { mSize ->
val metadata = ObjectInputStream(ByteBufInputStream(acc)).use {
acc.retain()
it.readObject() as CacheValueMetadata
}
ctx.writeAndFlush(CacheValueFoundResponse(key, metadata))
responseSent = true
acc.readerIndex(Int.SIZE_BYTES + mSize)
}
if (responseSent) {
acc.readBytes(outputStream, acc.readableBytes())
if(acc.readableBytes() >= chunkSize) {
flush(false)
}
}
}
private fun flush(last : Boolean) {
val toSend = extractChunk(chunk, ctx.alloc())
val msg = if(last) {
log.trace(ctx) {
"Sending last chunk to client on channel ${ctx.channel().id().asShortText()}"
}
LastCacheContent(toSend)
} else {
log.trace(ctx) {
"Sending chunk to client on channel ${ctx.channel().id().asShortText()}"
}
CacheContent(toSend)
}
ctx.writeAndFlush(msg)
}
fun commit() {
acc.release()
chunk.retain()
outputStream.close()
flush(true)
chunk.release()
}
fun rollback() {
acc.release()
outputStream.close()
}
}
private inner class InProgressPutRequest(
private val ch : NettyChannel,
metadata : CacheValueMetadata,
val digest : ByteBuf,
val requestController: CompletableFuture<MemcacheRequestController>,
private val alloc: ByteBufAllocator
) {
private var totalSize = 0
private var tmpFile : FileChannel? = null
private val accumulator = alloc.compositeBuffer()
private val stream = ByteBufOutputStream(accumulator).let {
if (compressionEnabled) {
DeflaterOutputStream(it, Deflater(compressionLevel))
} else {
it
}
}
init {
ByteArrayOutputStream().let { baos ->
ObjectOutputStream(baos).use {
it.writeObject(metadata)
}
val serializedBytes = baos.toByteArray()
accumulator.writeInt(serializedBytes.size)
accumulator.writeBytes(serializedBytes)
}
}
fun write(buf: ByteBuf) {
totalSize += buf.readableBytes()
buf.readBytes(stream, buf.readableBytes())
tmpFile?.let {
flushToDisk(it, accumulator)
}
if(accumulator.readableBytes() > 0x100000) {
log.debug(ch) {
"Entry is too big, buffering it into a file"
}
val opts = arrayOf(
StandardOpenOption.DELETE_ON_CLOSE,
StandardOpenOption.READ,
StandardOpenOption.WRITE,
StandardOpenOption.TRUNCATE_EXISTING
)
FileChannel.open(Files.createTempFile("rbcs-memcache", ".tmp"), *opts).let { fc ->
tmpFile = fc
flushToDisk(fc, accumulator)
}
}
}
private fun flushToDisk(fc : FileChannel, buf : CompositeByteBuf) {
val chunk = extractChunk(buf, alloc)
fc.write(chunk.nioBuffer())
chunk.release()
}
fun commit() : Pair<Int, ReadableByteChannel> {
digest.release()
accumulator.retain()
stream.close()
val fileChannel = tmpFile
return if(fileChannel != null) {
flushToDisk(fileChannel, accumulator)
accumulator.release()
fileChannel.position(0)
val fileSize = fileChannel.size().toIntOrNull() ?: let {
fileChannel.close()
throw ContentTooLargeException("Request body is too large", null)
}
fileSize to fileChannel
} else {
accumulator.readableBytes() to Channels.newChannel(ByteBufInputStream(accumulator))
}
}
fun rollback() {
stream.close()
digest.release()
tmpFile?.close()
}
}
private var inProgressPutRequest: InProgressPutRequest? = null
private var inProgressGetRequest: InProgressGetRequest? = null
override fun channelRead0(ctx: ChannelHandlerContext, msg: CacheMessage) {
when (msg) {
is CacheGetRequest -> handleGetRequest(ctx, msg)
is CachePutRequest -> handlePutRequest(ctx, msg)
is LastCacheContent -> handleLastCacheContent(ctx, msg)
is CacheContent -> handleCacheContent(ctx, msg)
else -> ctx.fireChannelRead(msg)
}
}
private fun handleGetRequest(ctx: ChannelHandlerContext, msg: CacheGetRequest) {
log.debug(ctx) {
"Fetching ${msg.key} from memcache"
}
val key = ctx.alloc().buffer().also {
it.writeBytes(processCacheKey(msg.key, digestAlgorithm))
}
val responseHandler = object : MemcacheResponseHandler {
override fun responseReceived(response: BinaryMemcacheResponse) {
val status = response.status()
when (status) {
BinaryMemcacheResponseStatus.SUCCESS -> {
log.debug(ctx) {
"Cache hit for key ${msg.key} on memcache"
}
inProgressGetRequest = InProgressGetRequest(msg.key, ctx)
}
BinaryMemcacheResponseStatus.KEY_ENOENT -> {
log.debug(ctx) {
"Cache miss for key ${msg.key} on memcache"
}
ctx.writeAndFlush(CacheValueNotFoundResponse())
}
}
}
override fun contentReceived(content: MemcacheContent) {
log.trace(ctx) {
"${if(content is LastMemcacheContent) "Last chunk" else "Chunk"} of ${content.content().readableBytes()} bytes received from memcache for key ${msg.key}"
}
inProgressGetRequest?.write(content.content())
if (content is LastMemcacheContent) {
inProgressGetRequest?.commit()
}
}
override fun exceptionCaught(ex: Throwable) {
inProgressGetRequest?.let {
inProgressGetRequest = null
it.rollback()
}
this@MemcacheCacheHandler.exceptionCaught(ctx, ex)
}
}
client.sendRequest(key.retainedDuplicate(), responseHandler).thenAccept { requestHandle ->
log.trace(ctx) {
"Sending GET request for key ${msg.key} to memcache"
}
val request = DefaultBinaryMemcacheRequest(key).apply {
setOpcode(BinaryMemcacheOpcodes.GET)
}
requestHandle.sendRequest(request)
}
}
private fun handlePutRequest(ctx: ChannelHandlerContext, msg: CachePutRequest) {
val key = ctx.alloc().buffer().also {
it.writeBytes(processCacheKey(msg.key, digestAlgorithm))
}
val responseHandler = object : MemcacheResponseHandler {
override fun responseReceived(response: BinaryMemcacheResponse) {
val status = response.status()
when (status) {
BinaryMemcacheResponseStatus.SUCCESS -> {
log.debug(ctx) {
"Inserted key ${msg.key} into memcache"
}
ctx.writeAndFlush(CachePutResponse(msg.key))
}
else -> this@MemcacheCacheHandler.exceptionCaught(ctx, MemcacheException(status))
}
}
override fun contentReceived(content: MemcacheContent) {}
override fun exceptionCaught(ex: Throwable) {
this@MemcacheCacheHandler.exceptionCaught(ctx, ex)
}
}
val requestController = client.sendRequest(key.retainedDuplicate(), responseHandler).whenComplete { _, ex ->
ex?.let {
this@MemcacheCacheHandler.exceptionCaught(ctx, ex)
}
}
inProgressPutRequest = InProgressPutRequest(ctx.channel(), msg.metadata, key, requestController, ctx.alloc())
}
private fun handleCacheContent(ctx: ChannelHandlerContext, msg: CacheContent) {
inProgressPutRequest?.let { request ->
log.trace(ctx) {
"Received chunk of ${msg.content().readableBytes()} bytes for memcache"
}
request.write(msg.content())
}
}
private fun handleLastCacheContent(ctx: ChannelHandlerContext, msg: LastCacheContent) {
inProgressPutRequest?.let { request ->
inProgressPutRequest = null
log.trace(ctx) {
"Received last chunk of ${msg.content().readableBytes()} bytes for memcache"
}
request.write(msg.content())
val key = request.digest.retainedDuplicate()
val (payloadSize, payloadSource) = request.commit()
val extras = ctx.alloc().buffer(8, 8)
extras.writeInt(0)
extras.writeInt(encodeExpiry(maxAge))
val totalBodyLength = request.digest.readableBytes() + extras.readableBytes() + payloadSize
request.requestController.whenComplete { requestController, ex ->
if(ex == null) {
log.trace(ctx) {
"Sending SET request to memcache"
}
requestController.sendRequest(DefaultBinaryMemcacheRequest().apply {
setOpcode(BinaryMemcacheOpcodes.SET)
setKey(key)
setExtras(extras)
setTotalBodyLength(totalBodyLength)
})
log.trace(ctx) {
"Sending request payload to memcache"
}
payloadSource.use { source ->
val bb = ByteBuffer.allocate(chunkSize)
while (true) {
val read = source.read(bb)
bb.limit()
if(read >= 0 && bb.position() < chunkSize && bb.hasRemaining()) {
continue
}
val chunk = ctx.alloc().buffer(chunkSize)
bb.flip()
chunk.writeBytes(bb)
bb.clear()
log.trace(ctx) {
"Sending ${chunk.readableBytes()} bytes chunk to memcache"
}
if(read < 0) {
requestController.sendContent(DefaultLastMemcacheContent(chunk))
break
} else {
requestController.sendContent(DefaultMemcacheContent(chunk))
}
}
}
} else {
payloadSource.close()
}
}
}
}
override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
inProgressGetRequest?.let {
inProgressGetRequest = null
it.rollback()
}
inProgressPutRequest?.let {
inProgressPutRequest = null
it.requestController.thenAccept { controller ->
controller.exceptionCaught(cause)
}
it.rollback()
}
super.exceptionCaught(ctx, cause)
}
}
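The value layout produced above (a 4-byte length prefix, the java-serialized `CacheValueMetadata`, then the possibly deflated payload) can be summarized with this standalone framing sketch; it illustrates the wire layout only and is not the handler's actual buffer management:

```kotlin
import java.io.ByteArrayOutputStream
import java.io.DataOutputStream
import java.io.ObjectOutputStream
import java.io.Serializable

// Frame a cache entry the way the put path does: metadata size, metadata, body
fun frame(metadata: Serializable, body: ByteArray): ByteArray {
    val metadataBytes = ByteArrayOutputStream().also { baos ->
        ObjectOutputStream(baos).use { it.writeObject(metadata) }
    }.toByteArray()
    val out = ByteArrayOutputStream()
    DataOutputStream(out).use { dos ->
        dos.writeInt(metadataBytes.size) // 4-byte big-endian length prefix
        dos.write(metadataBytes)         // java-serialized CacheValueMetadata
        dos.write(body)                  // cache value, deflated when compression is enabled
    }
    return out.toByteArray()
}
```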

View File

@@ -2,8 +2,8 @@ package net.woggioni.rbcs.server.memcache
import net.woggioni.rbcs.api.CacheProvider
import net.woggioni.rbcs.api.exception.ConfigurationException
import net.woggioni.rbcs.common.RBCS
import net.woggioni.rbcs.common.HostAndPort
import net.woggioni.rbcs.common.RBCS
import net.woggioni.rbcs.common.Xml
import net.woggioni.rbcs.common.Xml.Companion.asIterable
import net.woggioni.rbcs.common.Xml.Companion.renderAttribute
@@ -28,18 +28,19 @@ class MemcacheCacheProvider : CacheProvider<MemcacheCacheConfiguration> {
val maxAge = el.renderAttribute("max-age")
?.let(Duration::parse)
?: Duration.ofDays(1)
val maxSize = el.renderAttribute("max-size")
?.let(String::toInt)
?: 0x100000
val chunkSize = el.renderAttribute("chunk-size")
?.let(Integer::decode)
?: 0x10000
val compressionLevel = el.renderAttribute("compression-level")
?.let(Integer::decode)
?: -1
val compressionMode = el.renderAttribute("compression-mode")
?.let {
when (it) {
"gzip" -> MemcacheCacheConfiguration.CompressionMode.GZIP
"deflate" -> MemcacheCacheConfiguration.CompressionMode.DEFLATE
else -> MemcacheCacheConfiguration.CompressionMode.DEFLATE
}
}
?: MemcacheCacheConfiguration.CompressionMode.DEFLATE
val digestAlgorithm = el.renderAttribute("digest")
for (child in el.asIterable()) {
when (child.nodeName) {
@@ -60,9 +61,10 @@ class MemcacheCacheProvider : CacheProvider<MemcacheCacheConfiguration> {
return MemcacheCacheConfiguration(
servers,
maxAge,
maxSize,
digestAlgorithm,
compressionMode,
compressionLevel,
chunkSize
)
}
@@ -70,7 +72,6 @@ class MemcacheCacheProvider : CacheProvider<MemcacheCacheConfiguration> {
val result = doc.createElement("cache")
Xml.of(doc, result) {
attr("xmlns:${xmlNamespacePrefix}", xmlNamespace, namespaceURI = "http://www.w3.org/2000/xmlns/")
attr("xs:type", "${xmlNamespacePrefix}:$xmlType", RBCS.XML_SCHEMA_NAMESPACE_URI)
for (server in servers) {
node("server") {
@@ -83,18 +84,18 @@ class MemcacheCacheProvider : CacheProvider<MemcacheCacheConfiguration> {
}
}
attr("max-age", maxAge.toString())
attr("max-size", maxSize.toString())
attr("chunk-size", chunkSize.toString())
digestAlgorithm?.let { digestAlgorithm ->
attr("digest", digestAlgorithm)
}
compressionMode?.let { compressionMode ->
attr(
"compression-mode", when (compressionMode) {
MemcacheCacheConfiguration.CompressionMode.GZIP -> "gzip"
MemcacheCacheConfiguration.CompressionMode.DEFLATE -> "deflate"
}
)
}
attr("compression-level", compressionLevel.toString())
}
result
}

View File

@@ -3,68 +3,53 @@ package net.woggioni.rbcs.server.memcache.client
import io.netty.bootstrap.Bootstrap
import io.netty.buffer.ByteBuf
import io.netty.buffer.Unpooled
import io.netty.channel.Channel
import io.netty.channel.ChannelFactory
import io.netty.channel.ChannelFutureListener
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.ChannelOption
import io.netty.channel.ChannelPipeline
import io.netty.channel.EventLoopGroup
import io.netty.channel.SimpleChannelInboundHandler
import io.netty.channel.nio.NioEventLoopGroup
import io.netty.channel.pool.AbstractChannelPoolHandler
import io.netty.channel.pool.ChannelPool
import io.netty.channel.pool.FixedChannelPool
import io.netty.channel.socket.nio.NioSocketChannel
import io.netty.handler.codec.DecoderException
import io.netty.channel.socket.SocketChannel
import io.netty.handler.codec.memcache.LastMemcacheContent
import io.netty.handler.codec.memcache.MemcacheContent
import io.netty.handler.codec.memcache.MemcacheObject
import io.netty.handler.codec.memcache.binary.BinaryMemcacheClientCodec
import io.netty.handler.codec.memcache.binary.BinaryMemcacheObjectAggregator
import io.netty.handler.codec.memcache.binary.BinaryMemcacheOpcodes
import io.netty.handler.codec.memcache.binary.BinaryMemcacheResponseStatus
import io.netty.handler.codec.memcache.binary.DefaultFullBinaryMemcacheRequest
import io.netty.handler.codec.memcache.binary.FullBinaryMemcacheRequest
import io.netty.handler.codec.memcache.binary.FullBinaryMemcacheResponse
import io.netty.handler.codec.memcache.binary.BinaryMemcacheRequest
import io.netty.handler.codec.memcache.binary.BinaryMemcacheResponse
import io.netty.util.concurrent.GenericFutureListener
import net.woggioni.rbcs.common.ByteBufInputStream
import net.woggioni.rbcs.common.ByteBufOutputStream
import net.woggioni.rbcs.common.RBCS.digest
import net.woggioni.rbcs.common.HostAndPort
import net.woggioni.rbcs.common.contextLogger
import net.woggioni.rbcs.common.createLogger
import net.woggioni.rbcs.common.warn
import net.woggioni.rbcs.server.memcache.MemcacheCacheConfiguration
import net.woggioni.rbcs.server.memcache.MemcacheException
import net.woggioni.jwo.JWO
import net.woggioni.rbcs.server.memcache.MemcacheCacheHandler
import java.io.IOException
import java.net.InetSocketAddress
import java.nio.channels.Channels
import java.nio.channels.ReadableByteChannel
import java.security.MessageDigest
import java.time.Duration
import java.time.Instant
import java.util.concurrent.CompletableFuture
import java.util.concurrent.ConcurrentHashMap
import java.util.zip.Deflater
import java.util.zip.DeflaterOutputStream
import java.util.zip.GZIPInputStream
import java.util.zip.GZIPOutputStream
import java.util.zip.InflaterInputStream
import io.netty.util.concurrent.Future as NettyFuture
class MemcacheClient(private val cfg: MemcacheCacheConfiguration) : AutoCloseable {
class MemcacheClient(
private val servers: List<MemcacheCacheConfiguration.Server>,
private val chunkSize : Int,
private val group: EventLoopGroup,
private val channelFactory: ChannelFactory<SocketChannel>,
private val connectionPool: ConcurrentHashMap<HostAndPort, FixedChannelPool>
) : AutoCloseable {
private companion object {
@JvmStatic
private val log = contextLogger()
}
private val group: NioEventLoopGroup
private val connectionPool: MutableMap<HostAndPort, ChannelPool> = ConcurrentHashMap()
init {
group = NioEventLoopGroup()
private val log = createLogger<MemcacheCacheHandler>()
}
private fun newConnectionPool(server: MemcacheCacheConfiguration.Server): FixedChannelPool {
val bootstrap = Bootstrap().apply {
group(group)
channel(NioSocketChannel::class.java)
channelFactory(channelFactory)
option(ChannelOption.SO_KEEPALIVE, true)
remoteAddress(InetSocketAddress(server.endpoint.host, server.endpoint.port))
server.connectionTimeoutMillis?.let {
@@ -75,19 +60,17 @@ class MemcacheClient(private val cfg: MemcacheCacheConfiguration) : AutoCloseabl
override fun channelCreated(ch: Channel) {
val pipeline: ChannelPipeline = ch.pipeline()
pipeline.addLast(BinaryMemcacheClientCodec())
pipeline.addLast(BinaryMemcacheObjectAggregator(cfg.maxSize))
pipeline.addLast(BinaryMemcacheClientCodec(chunkSize, true))
}
}
return FixedChannelPool(bootstrap, channelPoolHandler, server.maxConnections)
}
private fun sendRequest(request: FullBinaryMemcacheRequest): CompletableFuture<FullBinaryMemcacheResponse> {
val server = cfg.servers.let { servers ->
if (servers.size > 1) {
val key = request.key().duplicate()
fun sendRequest(
key: ByteBuf,
responseHandler: MemcacheResponseHandler
): CompletableFuture<MemcacheRequestController> {
val server = if (servers.size > 1) {
var checksum = 0
while (key.readableBytes() > 4) {
val byte = key.readInt()
@@ -101,9 +84,9 @@ class MemcacheClient(private val cfg: MemcacheCacheConfiguration) : AutoCloseabl
} else {
servers.first()
}
}
key.release()
val response = CompletableFuture<FullBinaryMemcacheResponse>()
val response = CompletableFuture<MemcacheRequestController>()
// Custom handler for processing responses
val pool = connectionPool.computeIfAbsent(server.endpoint) {
newConnectionPool(server)
@@ -111,33 +94,108 @@ class MemcacheClient(private val cfg: MemcacheCacheConfiguration) : AutoCloseabl
pool.acquire().addListener(object : GenericFutureListener<NettyFuture<Channel>> {
override fun operationComplete(channelFuture: NettyFuture<Channel>) {
if (channelFuture.isSuccess) {
var requestSent = false
var requestBodySent = false
var requestFinished = false
var responseReceived = false
var responseBodyReceived = false
var responseFinished = false
var requestBodySize = 0
var requestBodyBytesSent = 0
val channel = channelFuture.now
var connectionClosedByTheRemoteServer = true
val closeCallback = {
if (connectionClosedByTheRemoteServer) {
val ex = IOException("The memcache server closed the connection")
val completed = response.completeExceptionally(ex)
if(!completed) responseHandler.exceptionCaught(ex)
log.warn {
"RequestSent: $requestSent, RequestBodySent: $requestBodySent, " +
"RequestFinished: $requestFinished, ResponseReceived: $responseReceived, " +
"ResponseBodyReceived: $responseBodyReceived, ResponseFinished: $responseFinished, " +
"RequestBodySize: $requestBodySize, RequestBodyBytesSent: $requestBodyBytesSent"
}
}
pool.release(channel)
}
val closeListener = ChannelFutureListener {
closeCallback()
}
channel.closeFuture().addListener(closeListener)
val pipeline = channel.pipeline()
channel.pipeline()
.addLast("client-handler", object : SimpleChannelInboundHandler<FullBinaryMemcacheResponse>() {
val handler = object : SimpleChannelInboundHandler<MemcacheObject>() {
override fun handlerAdded(ctx: ChannelHandlerContext) {
channel.closeFuture().removeListener(closeListener)
}
override fun channelRead0(
ctx: ChannelHandlerContext,
msg: FullBinaryMemcacheResponse
msg: MemcacheObject
) {
pipeline.removeLast()
when (msg) {
is BinaryMemcacheResponse -> {
responseHandler.responseReceived(msg)
responseReceived = true
}
is LastMemcacheContent -> {
responseFinished = true
responseHandler.contentReceived(msg)
pipeline.remove(this)
pool.release(channel)
msg.touch("The method's caller must remember to release this")
response.complete(msg.retain())
}
is MemcacheContent -> {
responseBodyReceived = true
responseHandler.contentReceived(msg)
}
}
}
override fun channelInactive(ctx: ChannelHandlerContext) {
closeCallback()
ctx.fireChannelInactive()
}
override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
val ex = when (cause) {
is DecoderException -> cause.cause!!
else -> cause
}
connectionClosedByTheRemoteServer = false
ctx.close()
pipeline.removeLast()
pool.release(channel)
response.completeExceptionally(ex)
responseHandler.exceptionCaught(cause)
}
}
channel.pipeline()
.addLast("client-handler", handler)
response.complete(object : MemcacheRequestController {
override fun sendRequest(request: BinaryMemcacheRequest) {
requestBodySize = request.totalBodyLength() - request.keyLength() - request.extrasLength()
channel.writeAndFlush(request)
requestSent = true
}
override fun sendContent(content: MemcacheContent) {
val size = content.content().readableBytes()
channel.writeAndFlush(content).addListener {
requestBodyBytesSent += size
requestBodySent = true
if(content is LastMemcacheContent) {
requestFinished = true
}
}
}
override fun exceptionCaught(ex: Throwable) {
connectionClosedByTheRemoteServer = false
channel.close()
}
})
request.touch()
channel.writeAndFlush(request)
} else {
response.completeExceptionally(channelFuture.cause())
}
@@ -146,107 +204,6 @@ class MemcacheClient(private val cfg: MemcacheCacheConfiguration) : AutoCloseabl
return response
}
private fun encodeExpiry(expiry: Duration): Int {
val expirySeconds = expiry.toSeconds()
return expirySeconds.toInt().takeIf { it.toLong() == expirySeconds }
?: Instant.ofEpochSecond(expirySeconds).epochSecond.toInt()
}
fun get(key: String): CompletableFuture<ReadableByteChannel?> {
val request = (cfg.digestAlgorithm
?.let(MessageDigest::getInstance)
?.let { md ->
digest(key.toByteArray(), md)
} ?: key.toByteArray(Charsets.UTF_8)).let { digest ->
DefaultFullBinaryMemcacheRequest(Unpooled.wrappedBuffer(digest), null).apply {
setOpcode(BinaryMemcacheOpcodes.GET)
}
}
return sendRequest(request).thenApply { response ->
try {
when (val status = response.status()) {
BinaryMemcacheResponseStatus.SUCCESS -> {
val compressionMode = cfg.compressionMode
val content = response.content().retain()
content.touch()
if (compressionMode != null) {
when (compressionMode) {
MemcacheCacheConfiguration.CompressionMode.GZIP -> {
GZIPInputStream(ByteBufInputStream(content))
}
MemcacheCacheConfiguration.CompressionMode.DEFLATE -> {
InflaterInputStream(ByteBufInputStream(content))
}
}
} else {
ByteBufInputStream(content)
}.let(Channels::newChannel)
}
BinaryMemcacheResponseStatus.KEY_ENOENT -> {
null
}
else -> throw MemcacheException(status)
}
} finally {
response.release()
}
}
}
fun put(key: String, content: ByteBuf, expiry: Duration, cas: Long? = null): CompletableFuture<Void> {
val request = (cfg.digestAlgorithm
?.let(MessageDigest::getInstance)
?.let { md ->
digest(key.toByteArray(), md)
} ?: key.toByteArray(Charsets.UTF_8)).let { digest ->
val extras = Unpooled.buffer(8, 8)
extras.writeInt(0)
extras.writeInt(encodeExpiry(expiry))
val compressionMode = cfg.compressionMode
content.retain()
val payload = if (compressionMode != null) {
val inputStream = ByteBufInputStream(content)
val buf = content.alloc().buffer()
buf.retain()
val outputStream = when (compressionMode) {
MemcacheCacheConfiguration.CompressionMode.GZIP -> {
GZIPOutputStream(ByteBufOutputStream(buf))
}
MemcacheCacheConfiguration.CompressionMode.DEFLATE -> {
DeflaterOutputStream(ByteBufOutputStream(buf), Deflater(Deflater.DEFAULT_COMPRESSION, false))
}
}
inputStream.use { i ->
outputStream.use { o ->
JWO.copy(i, o)
}
}
buf
} else {
content
}
DefaultFullBinaryMemcacheRequest(Unpooled.wrappedBuffer(digest), extras, payload).apply {
setOpcode(BinaryMemcacheOpcodes.SET)
cas?.let(this::setCas)
}
}
return sendRequest(request).thenApply { response ->
try {
when (val status = response.status()) {
BinaryMemcacheResponseStatus.SUCCESS -> null
else -> throw MemcacheException(status)
}
} finally {
response.release()
}
}
}
fun shutDown(): NettyFuture<*> {
return group.shutdownGracefully()
}

View File

@@ -0,0 +1,13 @@
package net.woggioni.rbcs.server.memcache.client
import io.netty.handler.codec.memcache.MemcacheContent
import io.netty.handler.codec.memcache.binary.BinaryMemcacheRequest
interface MemcacheRequestController {
fun sendRequest(request : BinaryMemcacheRequest)
fun sendContent(content : MemcacheContent)
fun exceptionCaught(ex : Throwable)
}

View File

@@ -0,0 +1,14 @@
package net.woggioni.rbcs.server.memcache.client
import io.netty.handler.codec.memcache.MemcacheContent
import io.netty.handler.codec.memcache.binary.BinaryMemcacheResponse
interface MemcacheResponseHandler {
fun responseReceived(response : BinaryMemcacheResponse)
fun contentReceived(content : MemcacheContent)
fun exceptionCaught(ex : Throwable)
}

View File

@@ -20,9 +20,10 @@
<xs:element name="server" type="rbcs-memcache:memcacheServerType"/>
</xs:sequence>
<xs:attribute name="max-age" type="xs:duration" default="P1D"/>
<xs:attribute name="max-size" type="xs:unsignedInt" default="1048576"/>
<xs:attribute name="digest" type="xs:token" />
<xs:attribute name="chunk-size" type="rbcs:byteSizeType" default="0x10000"/>
<xs:attribute name="digest" type="xs:token"/>
<xs:attribute name="compression-mode" type="rbcs-memcache:compressionType"/>
<xs:attribute name="compression-level" type="rbcs:compressionLevelType" default="-1"/>
</xs:extension>
</xs:complexContent>
</xs:complexType>
@@ -30,7 +31,6 @@
<xs:simpleType name="compressionType">
<xs:restriction base="xs:token">
<xs:enumeration value="deflate"/>
<xs:enumeration value="gzip"/>
</xs:restriction>
</xs:simpleType>

View File

@@ -0,0 +1,27 @@
package net.woggioni.rbcs.server.memcache.client
import io.netty.buffer.ByteBufUtil
import io.netty.buffer.Unpooled
import org.junit.jupiter.api.Assertions
import org.junit.jupiter.api.Test
import java.io.ByteArrayInputStream
import java.nio.ByteBuffer
import java.nio.channels.Channels
import kotlin.random.Random
class ByteBufferTest {
@Test
fun test() {
val byteBuffer = ByteBuffer.allocate(0x100)
val originalBytes = Random(101325).nextBytes(0x100)
Channels.newChannel(ByteArrayInputStream(originalBytes)).use { source ->
source.read(byteBuffer)
}
byteBuffer.flip()
val buf = Unpooled.buffer()
buf.writeBytes(byteBuffer)
val finalBytes = ByteBufUtil.getBytes(buf)
Assertions.assertArrayEquals(originalBytes, finalBytes)
}
}

View File

@@ -1,30 +0,0 @@
package net.woggioni.rbcs.server
import io.netty.channel.ChannelHandlerContext
import org.slf4j.Logger
import java.net.InetSocketAddress
inline fun Logger.trace(ctx : ChannelHandlerContext, messageBuilder : () -> String) {
log(this, ctx, { isTraceEnabled }, { trace(it) } , messageBuilder)
}
inline fun Logger.debug(ctx : ChannelHandlerContext, messageBuilder : () -> String) {
log(this, ctx, { isDebugEnabled }, { debug(it) } , messageBuilder)
}
inline fun Logger.info(ctx : ChannelHandlerContext, messageBuilder : () -> String) {
log(this, ctx, { isInfoEnabled }, { info(it) } , messageBuilder)
}
inline fun Logger.warn(ctx : ChannelHandlerContext, messageBuilder : () -> String) {
log(this, ctx, { isWarnEnabled }, { warn(it) } , messageBuilder)
}
inline fun Logger.error(ctx : ChannelHandlerContext, messageBuilder : () -> String) {
log(this, ctx, { isErrorEnabled }, { error(it) } , messageBuilder)
}
inline fun log(log : Logger, ctx : ChannelHandlerContext,
filter : Logger.() -> Boolean,
loggerMethod : Logger.(String) -> Unit, messageBuilder : () -> String) {
if(log.filter()) {
val clientAddress = (ctx.channel().remoteAddress() as InetSocketAddress).address.hostAddress
log.loggerMethod(clientAddress + " - " + messageBuilder())
}
}

View File

@@ -3,6 +3,7 @@ package net.woggioni.rbcs.server
import io.netty.bootstrap.ServerBootstrap
import io.netty.buffer.ByteBuf
import io.netty.channel.Channel
import io.netty.channel.ChannelFactory
import io.netty.channel.ChannelFuture
import io.netty.channel.ChannelHandler.Sharable
import io.netty.channel.ChannelHandlerContext
@@ -11,12 +12,16 @@ import io.netty.channel.ChannelInitializer
import io.netty.channel.ChannelOption
import io.netty.channel.ChannelPromise
import io.netty.channel.nio.NioEventLoopGroup
import io.netty.channel.socket.DatagramChannel
import io.netty.channel.socket.ServerSocketChannel
import io.netty.channel.socket.SocketChannel
import io.netty.channel.socket.nio.NioDatagramChannel
import io.netty.channel.socket.nio.NioServerSocketChannel
import io.netty.channel.socket.nio.NioSocketChannel
import io.netty.handler.codec.compression.CompressionOptions
import io.netty.handler.codec.http.DefaultHttpContent
import io.netty.handler.codec.http.HttpContentCompressor
import io.netty.handler.codec.http.HttpHeaderNames
import io.netty.handler.codec.http.HttpObjectAggregator
import io.netty.handler.codec.http.HttpRequest
import io.netty.handler.codec.http.HttpServerCodec
import io.netty.handler.ssl.ClientAuth
@@ -30,51 +35,57 @@ import io.netty.handler.timeout.IdleStateHandler
import io.netty.util.AttributeKey
import io.netty.util.concurrent.DefaultEventExecutorGroup
import io.netty.util.concurrent.EventExecutorGroup
import net.woggioni.jwo.JWO
import net.woggioni.jwo.Tuple2
import net.woggioni.rbcs.api.AsyncCloseable
import net.woggioni.rbcs.api.Configuration
import net.woggioni.rbcs.api.exception.ConfigurationException
import net.woggioni.rbcs.common.PasswordSecurity.decodePasswordHash
import net.woggioni.rbcs.common.PasswordSecurity.hashPassword
import net.woggioni.rbcs.common.RBCS.getTrustManager
import net.woggioni.rbcs.common.RBCS.loadKeystore
import net.woggioni.rbcs.common.RBCS.toUrl
import net.woggioni.rbcs.common.Xml
import net.woggioni.rbcs.common.contextLogger
import net.woggioni.rbcs.common.createLogger
import net.woggioni.rbcs.common.debug
import net.woggioni.rbcs.common.info
import net.woggioni.rbcs.server.auth.AbstractNettyHttpAuthenticator
import net.woggioni.rbcs.server.auth.Authorizer
import net.woggioni.rbcs.server.auth.ClientCertificateValidator
import net.woggioni.rbcs.server.auth.RoleAuthorizer
import net.woggioni.rbcs.server.configuration.Parser
import net.woggioni.rbcs.server.configuration.Serializer
import net.woggioni.rbcs.server.exception.ExceptionHandler
import net.woggioni.rbcs.server.handler.MaxRequestSizeHandler
import net.woggioni.rbcs.server.handler.ServerHandler
import net.woggioni.rbcs.server.handler.TraceHandler
import net.woggioni.rbcs.server.throttling.BucketManager
import net.woggioni.rbcs.server.throttling.ThrottlingHandler
import java.io.OutputStream
import java.net.InetSocketAddress
import java.nio.file.Files
import java.nio.file.Path
import java.security.KeyStore
import java.security.PrivateKey
import java.security.cert.X509Certificate
import java.time.Duration
import java.time.Instant
import java.util.Arrays
import java.util.Base64
import java.util.concurrent.CompletableFuture
import java.util.concurrent.Future
import java.util.concurrent.TimeUnit
import java.util.concurrent.TimeoutException
import java.util.regex.Matcher
import java.util.regex.Pattern
import javax.naming.ldap.LdapName
import javax.net.ssl.SSLPeerUnverifiedException
class RemoteBuildCacheServer(private val cfg: Configuration) {
private val log = contextLogger()
companion object {
private val log = createLogger<RemoteBuildCacheServer>()
val userAttribute: AttributeKey<Configuration.User> = AttributeKey.valueOf("user")
val groupAttribute: AttributeKey<Set<Configuration.Group>> = AttributeKey.valueOf("group")
val DEFAULT_CONFIGURATION_URL by lazy { "classpath:net/woggioni/rbcs/server/rbcs-default.xml".toUrl() }
val DEFAULT_CONFIGURATION_URL by lazy { "jpms://net.woggioni.rbcs.server/net/woggioni/rbcs/server/rbcs-default.xml".toUrl() }
private const val SSL_HANDLER_NAME = "sslHandler"
fun loadConfiguration(configurationFile: Path): Configuration {
@@ -143,7 +154,9 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
private class NettyHttpBasicAuthenticator(
private val users: Map<String, Configuration.User>, authorizer: Authorizer
) : AbstractNettyHttpAuthenticator(authorizer) {
private val log = contextLogger()
companion object {
private val log = createLogger<NettyHttpBasicAuthenticator>()
}
override fun authenticate(ctx: ChannelHandlerContext, req: HttpRequest): AuthenticationResult? {
val authorizationHeader = req.headers()[HttpHeaderNames.AUTHORIZATION] ?: let {
@@ -192,8 +205,10 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
private class ServerInitializer(
private val cfg: Configuration,
private val channelFactory : ChannelFactory<SocketChannel>,
private val datagramChannelFactory : ChannelFactory<DatagramChannel>,
private val eventExecutorGroup: EventExecutorGroup
) : ChannelInitializer<Channel>(), AutoCloseable {
) : ChannelInitializer<Channel>(), AsyncCloseable {
companion object {
private fun createSslCtx(tls: Configuration.Tls): SslContext {
@@ -213,7 +228,7 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
val clientAuth = tls.trustStore?.let { trustStore ->
val ts = loadKeystore(trustStore.file, trustStore.password)
trustManager(
ClientCertificateValidator.getTrustManager(ts, trustStore.isCheckCertificateStatus)
getTrustManager(ts, trustStore.isCheckCertificateStatus)
)
if (trustStore.isRequireClientCertificate) ClientAuth.REQUIRE
else ClientAuth.OPTIONAL
@@ -223,39 +238,12 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
}
}
fun loadKeystore(file: Path, password: String?): KeyStore {
val ext = JWO.splitExtension(file)
.map(Tuple2<String, String>::get_2)
.orElseThrow {
IllegalArgumentException(
"Keystore file '${file}' must have .jks, .p12, .pfx extension"
)
}
val keystore = when (ext.substring(1).lowercase()) {
"jks" -> KeyStore.getInstance("JKS")
"p12", "pfx" -> KeyStore.getInstance("PKCS12")
else -> throw IllegalArgumentException(
"Keystore file '${file}' must have .jks, .p12, .pfx extension"
)
}
Files.newInputStream(file).use {
keystore.load(it, password?.let(String::toCharArray))
}
return keystore
}
private val log = createLogger<ServerInitializer>()
}
private val log = contextLogger()
private val cacheHandlerFactory = cfg.cache.materialize()
private val cache = cfg.cache.materialize()
private val serverHandler = let {
val prefix = Path.of("/").resolve(Path.of(cfg.serverPath ?: "/"))
ServerHandler(cache, prefix)
}
private val exceptionHandler = ExceptionHandler()
private val throttlingHandler = ThrottlingHandler(cfg)
private val bucketManager = BucketManager.from(cfg)
private val authenticator = when (val auth = cfg.authentication) {
is Configuration.BasicAuthentication -> NettyHttpBasicAuthenticator(cfg.users, RoleAuthorizer())
@@ -312,19 +300,6 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
}
val pipeline = ch.pipeline()
cfg.connection.also { conn ->
val readTimeout = conn.readTimeout.toMillis()
val writeTimeout = conn.writeTimeout.toMillis()
if (readTimeout > 0 || writeTimeout > 0) {
pipeline.addLast(
IdleStateHandler(
false,
readTimeout,
writeTimeout,
0,
TimeUnit.MILLISECONDS
)
)
}
val readIdleTimeout = conn.readIdleTimeout.toMillis()
val writeIdleTimeout = conn.writeIdleTimeout.toMillis()
val idleTimeout = conn.idleTimeout.toMillis()
@@ -366,63 +341,111 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
pipeline.addLast(SSL_HANDLER_NAME, it)
}
pipeline.addLast(HttpServerCodec())
pipeline.addLast(MaxRequestSizeHandler.NAME, MaxRequestSizeHandler(cfg.connection.maxRequestSize))
pipeline.addLast(HttpChunkContentCompressor(1024))
pipeline.addLast(ChunkedWriteHandler())
pipeline.addLast(HttpObjectAggregator(cfg.connection.maxRequestSize))
authenticator?.let {
pipeline.addLast(it)
}
pipeline.addLast(throttlingHandler)
pipeline.addLast(eventExecutorGroup, serverHandler)
pipeline.addLast(exceptionHandler)
pipeline.addLast(ThrottlingHandler(bucketManager, cfg.connection))
val serverHandler = let {
val prefix = Path.of("/").resolve(Path.of(cfg.serverPath ?: "/"))
ServerHandler(prefix)
}
pipeline.addLast(eventExecutorGroup, ServerHandler.NAME, serverHandler)
pipeline.addLast(cacheHandlerFactory.newHandler(ch.eventLoop(), channelFactory, datagramChannelFactory))
pipeline.addLast(TraceHandler)
pipeline.addLast(ExceptionHandler)
}
override fun close() {
cache.close()
}
override fun asyncClose() = cacheHandlerFactory.asyncClose()
}
class ServerHandle(
httpChannelFuture: ChannelFuture,
closeFuture: ChannelFuture,
private val bossGroup: EventExecutorGroup,
private val executorGroups: Iterable<EventExecutorGroup>,
private val serverInitializer: AutoCloseable
) : AutoCloseable {
private val httpChannel: Channel = httpChannelFuture.channel()
private val closeFuture: ChannelFuture = httpChannel.closeFuture()
private val log = contextLogger()
private val serverInitializer: AsyncCloseable,
) : Future<Void> by from(closeFuture, executorGroups, serverInitializer) {
fun shutdown(): Future<Void> {
return httpChannel.close()
}
companion object {
private val log = createLogger<ServerHandle>()
override fun close() {
try {
closeFuture.sync()
} catch (ex: Throwable) {
log.error(ex.message, ex)
}
private fun from(
closeFuture: ChannelFuture,
executorGroups: Iterable<EventExecutorGroup>,
serverInitializer: AsyncCloseable
): CompletableFuture<Void> {
val result = CompletableFuture<Void>()
closeFuture.addListener {
val errors = mutableListOf<Throwable>()
val deadline = Instant.now().plusSeconds(20)
try {
serverInitializer.close()
} catch (ex: Throwable) {
log.error(ex.message, ex)
errors.addLast(ex)
}
executorGroups.forEach {
try {
it.shutdownGracefully().sync()
} catch (ex: Throwable) {
serverInitializer.asyncClose().whenComplete { _, ex ->
if(ex != null) {
log.error(ex.message, ex)
errors.addLast(ex)
}
executorGroups.map {
it.shutdownGracefully()
}
for (executorGroup in executorGroups) {
val future = executorGroup.terminationFuture()
try {
val now = Instant.now()
if (now > deadline) {
future.get(0, TimeUnit.SECONDS)
} else {
future.get(Duration.between(now, deadline).toMillis(), TimeUnit.MILLISECONDS)
}
}
catch (te: TimeoutException) {
errors.addLast(te)
log.warn("Timeout while waiting for shutdown of $executorGroup", te)
} catch (ex: Throwable) {
log.warn(ex.message, ex)
errors.addLast(ex)
}
}
if(errors.isEmpty()) {
result.complete(null)
} else {
result.completeExceptionally(errors.first())
}
}
}
return result.thenAccept {
log.info {
"RemoteBuildCacheServer has been gracefully shut down"
}
}
}
}
fun sendShutdownSignal() {
bossGroup.shutdownGracefully()
}
}
fun run(): ServerHandle {
// Create the multithreaded event loops for the server
val bossGroup = NioEventLoopGroup(1)
val serverSocketChannel = NioServerSocketChannel::class.java
val channelFactory = ChannelFactory<SocketChannel> { NioSocketChannel() }
val datagramChannelFactory = ChannelFactory<DatagramChannel> { NioDatagramChannel() }
val serverChannelFactory = ChannelFactory<ServerSocketChannel> { NioServerSocketChannel() }
val workerGroup = NioEventLoopGroup(0)
val eventExecutorGroup = run {
val threadFactory = if (cfg.eventExecutor.isUseVirtualThreads) {
@@ -432,11 +455,11 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
}
DefaultEventExecutorGroup(Runtime.getRuntime().availableProcessors(), threadFactory)
}
val serverInitializer = ServerInitializer(cfg, eventExecutorGroup)
val serverInitializer = ServerInitializer(cfg, channelFactory, datagramChannelFactory, workerGroup)
val bootstrap = ServerBootstrap().apply {
// Configure the server
group(bossGroup, workerGroup)
channel(serverSocketChannel)
channelFactory(serverChannelFactory)
childHandler(serverInitializer)
option(ChannelOption.SO_BACKLOG, cfg.incomingConnectionsBacklogSize)
childOption(ChannelOption.SO_KEEPALIVE, true)
@@ -445,10 +468,16 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
// Bind and start to accept incoming connections.
val bindAddress = InetSocketAddress(cfg.host, cfg.port)
val httpChannel = bootstrap.bind(bindAddress).sync()
val httpChannel = bootstrap.bind(bindAddress).sync().channel()
log.info {
"RemoteBuildCacheServer is listening on ${cfg.host}:${cfg.port}"
}
return ServerHandle(httpChannel, setOf(bossGroup, workerGroup, eventExecutorGroup), serverInitializer)
return ServerHandle(
httpChannel.closeFuture(),
bossGroup,
setOf(workerGroup, eventExecutorGroup),
serverInitializer
)
}
}

View File

@@ -6,6 +6,7 @@ import io.netty.channel.ChannelHandlerContext
import io.netty.channel.ChannelInboundHandlerAdapter
import io.netty.handler.codec.http.DefaultFullHttpResponse
import io.netty.handler.codec.http.FullHttpResponse
import io.netty.handler.codec.http.HttpContent
import io.netty.handler.codec.http.HttpHeaderNames
import io.netty.handler.codec.http.HttpRequest
import io.netty.handler.codec.http.HttpResponseStatus
@@ -57,6 +58,8 @@ abstract class AbstractNettyHttpAuthenticator(private val authorizer: Authorizer
} else {
authorizationFailure(ctx, msg)
}
} else if(msg is HttpContent) {
ctx.fireChannelRead(msg)
}
}

View File

@@ -1,90 +0,0 @@
package net.woggioni.rbcs.server.auth
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.ChannelInboundHandlerAdapter
import io.netty.handler.ssl.SslHandler
import io.netty.handler.ssl.SslHandshakeCompletionEvent
import java.security.KeyStore
import java.security.cert.CertPathValidator
import java.security.cert.CertPathValidatorException
import java.security.cert.CertificateException
import java.security.cert.CertificateFactory
import java.security.cert.PKIXParameters
import java.security.cert.PKIXRevocationChecker
import java.security.cert.X509Certificate
import java.util.EnumSet
import javax.net.ssl.SSLSession
import javax.net.ssl.TrustManagerFactory
import javax.net.ssl.X509TrustManager
class ClientCertificateValidator private constructor(
private val sslHandler: SslHandler,
private val x509TrustManager: X509TrustManager
) : ChannelInboundHandlerAdapter() {
override fun userEventTriggered(ctx: ChannelHandlerContext, evt: Any) {
if (evt is SslHandshakeCompletionEvent) {
if (evt.isSuccess) {
val session: SSLSession = sslHandler.engine().session
val clientCertificateChain = session.peerCertificates as Array<X509Certificate>
val authType: String = clientCertificateChain[0].publicKey.algorithm
x509TrustManager.checkClientTrusted(clientCertificateChain, authType)
} else {
// Handle the failure, for example by closing the channel.
}
}
super.userEventTriggered(ctx, evt)
}
companion object {
fun getTrustManager(trustStore: KeyStore?, certificateRevocationEnabled: Boolean): X509TrustManager {
return if (trustStore != null) {
val certificateFactory = CertificateFactory.getInstance("X.509")
val validator = CertPathValidator.getInstance("PKIX").apply {
val rc = revocationChecker as PKIXRevocationChecker
rc.options = EnumSet.of(
PKIXRevocationChecker.Option.NO_FALLBACK
)
}
val params = PKIXParameters(trustStore).apply {
isRevocationEnabled = certificateRevocationEnabled
}
object : X509TrustManager {
override fun checkClientTrusted(chain: Array<out X509Certificate>, authType: String) {
val clientCertificateChain = certificateFactory.generateCertPath(chain.toList())
try {
validator.validate(clientCertificateChain, params)
} catch (ex: CertPathValidatorException) {
throw CertificateException(ex)
}
}
override fun checkServerTrusted(chain: Array<out X509Certificate>, authType: String) {
throw NotImplementedError()
}
private val acceptedIssuers = trustStore.aliases().asSequence()
.filter(trustStore::isCertificateEntry)
.map(trustStore::getCertificate)
.map { it as X509Certificate }
.toList()
.toTypedArray()
override fun getAcceptedIssuers() = acceptedIssuers
}
} else {
val trustManagerFactory = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm())
trustManagerFactory.trustManagers.asSequence().filter { it is X509TrustManager }
.single() as X509TrustManager
}
}
fun of(
sslHandler: SslHandler,
trustStore: KeyStore?,
certificateRevocationEnabled: Boolean
): ClientCertificateValidator {
return ClientCertificateValidator(sslHandler, getTrustManager(trustStore, certificateRevocationEnabled))
}
}
}

View File

@@ -1,11 +1,15 @@
package net.woggioni.rbcs.server.cache
import io.netty.buffer.ByteBuf
import net.woggioni.jwo.JWO
import net.woggioni.rbcs.api.Cache
import net.woggioni.rbcs.common.ByteBufInputStream
import net.woggioni.rbcs.common.RBCS.digestString
import net.woggioni.rbcs.common.contextLogger
import net.woggioni.rbcs.api.AsyncCloseable
import net.woggioni.rbcs.api.CacheValueMetadata
import net.woggioni.rbcs.common.createLogger
import java.io.ByteArrayOutputStream
import java.io.InputStream
import java.io.ObjectInputStream
import java.io.ObjectOutputStream
import java.io.Serializable
import java.nio.ByteBuffer
import java.nio.channels.Channels
import java.nio.channels.FileChannel
import java.nio.file.Files
@@ -13,26 +17,19 @@ import java.nio.file.Path
import java.nio.file.StandardCopyOption
import java.nio.file.StandardOpenOption
import java.nio.file.attribute.BasicFileAttributes
import java.security.MessageDigest
import java.time.Duration
import java.time.Instant
import java.util.concurrent.CompletableFuture
import java.util.zip.Deflater
import java.util.zip.DeflaterOutputStream
import java.util.zip.Inflater
import java.util.zip.InflaterInputStream
class FileSystemCache(
val root: Path,
val maxAge: Duration,
val digestAlgorithm: String?,
val compressionEnabled: Boolean,
val compressionLevel: Int
) : Cache {
val maxAge: Duration
) : AsyncCloseable {
class EntryValue(val metadata: CacheValueMetadata, val channel : FileChannel, val offset : Long, val size : Long) : Serializable
private companion object {
@JvmStatic
private val log = contextLogger()
private val log = createLogger<FileSystemCache>()
}
init {
@@ -44,67 +41,92 @@ class FileSystemCache(
private var nextGc = Instant.now()
override fun get(key: String) = (digestAlgorithm
?.let(MessageDigest::getInstance)
?.let { md ->
digestString(key.toByteArray(), md)
} ?: key).let { digest ->
root.resolve(digest).takeIf(Files::exists)
fun get(key: String): EntryValue? =
root.resolve(key).takeIf(Files::exists)
?.let { file ->
file.takeIf(Files::exists)?.let { file ->
if (compressionEnabled) {
val inflater = Inflater()
Channels.newChannel(
InflaterInputStream(
Channels.newInputStream(
FileChannel.open(
file,
StandardOpenOption.READ
)
), inflater
)
)
} else {
FileChannel.open(file, StandardOpenOption.READ)
}
}
}.let {
CompletableFuture.completedFuture(it)
val size = Files.size(file)
val channel = FileChannel.open(file, StandardOpenOption.READ)
val source = Channels.newInputStream(channel)
val tmp = ByteArray(Integer.BYTES)
val buffer = ByteBuffer.wrap(tmp)
source.read(tmp)
buffer.rewind()
val offset = (Integer.BYTES + buffer.getInt()).toLong()
var count = 0
val wrapper = object : InputStream() {
override fun read(): Int {
return source.read().also {
if (it > 0) count += it
}
}
override fun put(key: String, content: ByteBuf): CompletableFuture<Void> {
(digestAlgorithm
?.let(MessageDigest::getInstance)
?.let { md ->
digestString(key.toByteArray(), md)
} ?: key).let { digest ->
val file = root.resolve(digest)
override fun read(b: ByteArray, off: Int, len: Int): Int {
return source.read(b, off, len).also {
if (it > 0) count += it
}
}
override fun close() {
}
}
val metadata = ObjectInputStream(wrapper).use { ois ->
ois.readObject() as CacheValueMetadata
}
EntryValue(metadata, channel, offset, size)
}
class FileSink(metadata: CacheValueMetadata, private val path: Path, private val tmpFile: Path) {
val channel: FileChannel
init {
val baos = ByteArrayOutputStream()
ObjectOutputStream(baos).use {
it.writeObject(metadata)
}
Files.newOutputStream(tmpFile).use {
val bytes = baos.toByteArray()
val buffer = ByteBuffer.allocate(Integer.BYTES)
buffer.putInt(bytes.size)
buffer.rewind()
it.write(buffer.array())
it.write(bytes)
}
channel = FileChannel.open(tmpFile, StandardOpenOption.APPEND)
}
fun commit() {
channel.close()
Files.move(tmpFile, path, StandardCopyOption.ATOMIC_MOVE)
}
fun rollback() {
channel.close()
Files.delete(tmpFile)
}
}
fun put(
key: String,
metadata: CacheValueMetadata,
): FileSink {
val file = root.resolve(key)
val tmpFile = Files.createTempFile(root, null, ".tmp")
try {
Files.newOutputStream(tmpFile).let {
if (compressionEnabled) {
val deflater = Deflater(compressionLevel)
DeflaterOutputStream(it, deflater)
} else {
it
}
}.use {
JWO.copy(ByteBufInputStream(content), it)
}
Files.move(tmpFile, file, StandardCopyOption.ATOMIC_MOVE)
} catch (t: Throwable) {
Files.delete(tmpFile)
throw t
}
}
return CompletableFuture.completedFuture(null)
return FileSink(metadata, file, tmpFile)
}
private val garbageCollector = Thread.ofVirtual().name("file-system-cache-gc").start {
private val closeFuture = object : CompletableFuture<Void>() {
init {
Thread.ofVirtual().name("file-system-cache-gc").start {
try {
while (running) {
gc()
}
complete(null)
} catch (ex : Throwable) {
completeExceptionally(ex)
}
}
}
}
private fun gc() {
@@ -119,8 +141,8 @@ class FileSystemCache(
/**
* Returns the creation timestamp of the oldest cache entry (if any)
*/
private fun actualGc(now: Instant) : Instant? {
var result :Instant? = null
private fun actualGc(now: Instant): Instant? {
var result: Instant? = null
Files.list(root)
.filter { path ->
JWO.splitExtension(path)
@@ -132,7 +154,7 @@ class FileSystemCache(
val creationTimeStamp = Files.readAttributes(it, BasicFileAttributes::class.java)
.creationTime()
.toInstant()
if(result == null || creationTimeStamp < result) {
if (result == null || creationTimeStamp < result) {
result = creationTimeStamp
}
now > creationTimeStamp.plus(maxAge)
@@ -140,8 +162,8 @@ class FileSystemCache(
return result
}
override fun close() {
override fun asyncClose() : CompletableFuture<Void> {
running = false
garbageCollector.join()
return closeFuture
}
}
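
As the FileSink above shows, each cache file now starts with a 4-byte big-endian length prefix, followed by the serialized CacheValueMetadata, with the (possibly deflate-compressed) payload appended after it. A minimal sketch of how that layout could be read back outside the server; only the layout comes from the diff, the function and file path are illustrative:

import java.io.DataInputStream
import java.nio.file.Files
import java.nio.file.Path

fun readPayload(entryFile: Path): ByteArray =
    DataInputStream(Files.newInputStream(entryFile)).use { input ->
        val metadataSize = input.readInt()        // 4-byte big-endian prefix written by FileSink
        input.skipNBytes(metadataSize.toLong())   // skip the serialized CacheValueMetadata
        input.readAllBytes()                      // the rest is the raw (possibly deflated) payload
    }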

View File

@@ -1,8 +1,13 @@
package net.woggioni.rbcs.server.cache
import io.netty.channel.ChannelFactory
import io.netty.channel.EventLoopGroup
import io.netty.channel.socket.DatagramChannel
import io.netty.channel.socket.SocketChannel
import net.woggioni.jwo.Application
import net.woggioni.rbcs.api.CacheHandlerFactory
import net.woggioni.rbcs.api.Configuration
import net.woggioni.rbcs.common.RBCS
import net.woggioni.jwo.Application
import java.nio.file.Path
import java.time.Duration
@@ -12,14 +17,20 @@ data class FileSystemCacheConfiguration(
val digestAlgorithm : String?,
val compressionEnabled: Boolean,
val compressionLevel: Int,
val chunkSize: Int,
) : Configuration.Cache {
override fun materialize() = FileSystemCache(
root ?: Application.builder("rbcs").build().computeCacheDirectory(),
maxAge,
digestAlgorithm,
compressionEnabled,
compressionLevel
)
override fun materialize() = object : CacheHandlerFactory {
private val cache = FileSystemCache(root ?: Application.builder("rbcs").build().computeCacheDirectory(), maxAge)
override fun asyncClose() = cache.asyncClose()
override fun newHandler(
eventLoop: EventLoopGroup,
socketChannelFactory: ChannelFactory<SocketChannel>,
datagramChannelFactory: ChannelFactory<DatagramChannel>
) = FileSystemCacheHandler(cache, digestAlgorithm, compressionEnabled, compressionLevel, chunkSize)
}
override fun getNamespaceURI() = RBCS.RBCS_NAMESPACE_URI

View File

@@ -0,0 +1,122 @@
package net.woggioni.rbcs.server.cache
import io.netty.buffer.ByteBuf
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.SimpleChannelInboundHandler
import io.netty.handler.codec.http.LastHttpContent
import io.netty.handler.stream.ChunkedNioFile
import net.woggioni.rbcs.api.message.CacheMessage
import net.woggioni.rbcs.api.message.CacheMessage.CacheContent
import net.woggioni.rbcs.api.message.CacheMessage.CacheGetRequest
import net.woggioni.rbcs.api.message.CacheMessage.CachePutRequest
import net.woggioni.rbcs.api.message.CacheMessage.CachePutResponse
import net.woggioni.rbcs.api.message.CacheMessage.CacheValueFoundResponse
import net.woggioni.rbcs.api.message.CacheMessage.CacheValueNotFoundResponse
import net.woggioni.rbcs.api.message.CacheMessage.LastCacheContent
import net.woggioni.rbcs.common.RBCS.processCacheKey
import java.nio.channels.Channels
import java.util.Base64
import java.util.zip.Deflater
import java.util.zip.DeflaterOutputStream
import java.util.zip.InflaterInputStream
class FileSystemCacheHandler(
private val cache: FileSystemCache,
private val digestAlgorithm: String?,
private val compressionEnabled: Boolean,
private val compressionLevel: Int,
private val chunkSize: Int
) : SimpleChannelInboundHandler<CacheMessage>() {
private inner class InProgressPutRequest(
val key : String,
private val fileSink : FileSystemCache.FileSink
) {
private val stream = Channels.newOutputStream(fileSink.channel).let {
if (compressionEnabled) {
DeflaterOutputStream(it, Deflater(compressionLevel))
} else {
it
}
}
fun write(buf: ByteBuf) {
buf.readBytes(stream, buf.readableBytes())
}
fun commit() {
stream.close()
fileSink.commit()
}
fun rollback() {
fileSink.rollback()
}
}
private var inProgressPutRequest: InProgressPutRequest? = null
override fun channelRead0(ctx: ChannelHandlerContext, msg: CacheMessage) {
when (msg) {
is CacheGetRequest -> handleGetRequest(ctx, msg)
is CachePutRequest -> handlePutRequest(ctx, msg)
is LastCacheContent -> handleLastCacheContent(ctx, msg)
is CacheContent -> handleCacheContent(ctx, msg)
else -> ctx.fireChannelRead(msg)
}
}
private fun handleGetRequest(ctx: ChannelHandlerContext, msg: CacheGetRequest) {
val key = String(Base64.getUrlEncoder().encode(processCacheKey(msg.key, digestAlgorithm)))
cache.get(key)?.also { entryValue ->
ctx.writeAndFlush(CacheValueFoundResponse(msg.key, entryValue.metadata))
entryValue.channel.let { channel ->
if(compressionEnabled) {
InflaterInputStream(Channels.newInputStream(channel)).use { stream ->
outerLoop@
while (true) {
val buf = ctx.alloc().heapBuffer(chunkSize)
while(buf.readableBytes() < chunkSize) {
val read = buf.writeBytes(stream, chunkSize)
if(read < 0) {
ctx.writeAndFlush(LastCacheContent(buf))
break@outerLoop
}
}
ctx.writeAndFlush(CacheContent(buf))
}
}
} else {
ctx.writeAndFlush(ChunkedNioFile(channel, entryValue.offset, entryValue.size - entryValue.offset, chunkSize))
ctx.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT)
}
}
} ?: ctx.writeAndFlush(CacheValueNotFoundResponse())
}
private fun handlePutRequest(ctx: ChannelHandlerContext, msg: CachePutRequest) {
val key = String(Base64.getUrlEncoder().encode(processCacheKey(msg.key, digestAlgorithm)))
val sink = cache.put(key, msg.metadata)
inProgressPutRequest = InProgressPutRequest(msg.key, sink)
}
private fun handleCacheContent(ctx: ChannelHandlerContext, msg: CacheContent) {
inProgressPutRequest!!.write(msg.content())
}
private fun handleLastCacheContent(ctx: ChannelHandlerContext, msg: LastCacheContent) {
inProgressPutRequest?.let { request ->
inProgressPutRequest = null
request.write(msg.content())
request.commit()
ctx.writeAndFlush(CachePutResponse(request.key))
}
}
override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
inProgressPutRequest?.rollback()
super.exceptionCaught(ctx, cause)
}
}
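
For context, the handler above turns the request key into the on-disk file name by digesting it (when a digest algorithm is configured) and URL-safe Base64-encoding the result. A short illustration of that mapping; the key and digest values are arbitrary, and processCacheKey is the helper already imported in the diff, assumed here to return the digested key bytes:

import net.woggioni.rbcs.common.RBCS.processCacheKey
import java.util.Base64

fun main() {
    val requestKey = "outputs/abc123"                      // arbitrary cache key sent by the client
    val digested = processCacheKey(requestKey, "SHA-256")  // digested key bytes
    val fileName = String(Base64.getUrlEncoder().encode(digested))
    println(fileName)                                      // the entry lives at <cache root>/<fileName>
}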

View File

@@ -30,14 +30,18 @@ class FileSystemCacheProvider : CacheProvider<FileSystemCacheConfiguration> {
val compressionLevel = el.renderAttribute("compression-level")
?.let(String::toInt)
?: Deflater.DEFAULT_COMPRESSION
val digestAlgorithm = el.renderAttribute("digest") ?: "MD5"
val digestAlgorithm = el.renderAttribute("digest")
val chunkSize = el.renderAttribute("chunk-size")
?.let(Integer::decode)
?: 0x10000
return FileSystemCacheConfiguration(
path,
maxAge,
digestAlgorithm,
enableCompression,
compressionLevel
compressionLevel,
chunkSize
)
}
@@ -46,7 +50,9 @@ class FileSystemCacheProvider : CacheProvider<FileSystemCacheConfiguration> {
Xml.of(doc, result) {
val prefix = doc.lookupPrefix(RBCS.RBCS_NAMESPACE_URI)
attr("xs:type", "${prefix}:fileSystemCacheType", RBCS.XML_SCHEMA_NAMESPACE_URI)
attr("path", root.toString())
root?.let {
attr("path", it.toString())
}
attr("max-age", maxAge.toString())
digestAlgorithm?.let { digestAlgorithm ->
attr("digest", digestAlgorithm)
@@ -57,6 +63,7 @@ class FileSystemCacheProvider : CacheProvider<FileSystemCacheConfiguration> {
}?.let {
attr("compression-level", it.toString())
}
attr("chunk-size", chunkSize.toString())
}
result
}
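
Since chunk-size is parsed with Integer::decode, both decimal and hexadecimal notation are accepted in the configuration file. A quick illustration:

fun main() {
    check(Integer.decode("0x10000") == 65536) // hexadecimal, matching the default value above
    check(Integer.decode("65536") == 65536)   // plain decimal works as well
}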

View File

@@ -1,42 +1,44 @@
package net.woggioni.rbcs.server.cache
import io.netty.buffer.ByteBuf
import net.woggioni.jwo.JWO
import net.woggioni.rbcs.api.Cache
import net.woggioni.rbcs.common.ByteBufInputStream
import net.woggioni.rbcs.common.ByteBufOutputStream
import net.woggioni.rbcs.common.RBCS.digestString
import net.woggioni.rbcs.common.contextLogger
import java.nio.channels.Channels
import java.security.MessageDigest
import net.woggioni.rbcs.api.AsyncCloseable
import net.woggioni.rbcs.api.CacheValueMetadata
import net.woggioni.rbcs.common.createLogger
import java.time.Duration
import java.time.Instant
import java.util.concurrent.CompletableFuture
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.PriorityBlockingQueue
import java.util.concurrent.TimeUnit
import java.util.concurrent.atomic.AtomicLong
import java.util.zip.Deflater
import java.util.zip.DeflaterOutputStream
import java.util.zip.Inflater
import java.util.zip.InflaterInputStream
private class CacheKey(private val value: ByteArray) {
override fun equals(other: Any?) = if (other is CacheKey) {
value.contentEquals(other.value)
} else false
override fun hashCode() = value.contentHashCode()
}
class CacheEntry(
val metadata: CacheValueMetadata,
val content: ByteBuf
)
class InMemoryCache(
val maxAge: Duration,
val maxSize: Long,
val digestAlgorithm: String?,
val compressionEnabled: Boolean,
val compressionLevel: Int
) : Cache {
private val maxAge: Duration,
private val maxSize: Long
) : AsyncCloseable {
companion object {
@JvmStatic
private val log = contextLogger()
private val log = createLogger<InMemoryCache>()
}
private val size = AtomicLong()
private val map = ConcurrentHashMap<String, ByteBuf>()
private val map = ConcurrentHashMap<CacheKey, CacheEntry>()
private class RemovalQueueElement(val key: String, val value : ByteBuf, val expiry : Instant) : Comparable<RemovalQueueElement> {
private class RemovalQueueElement(val key: CacheKey, val value: CacheEntry, val expiry: Instant) :
Comparable<RemovalQueueElement> {
override fun compareTo(other: RemovalQueueElement) = expiry.compareTo(other.expiry)
}
@@ -45,106 +47,80 @@ class InMemoryCache(
@Volatile
private var running = true
private val garbageCollector = Thread.ofVirtual().name("in-memory-cache-gc").start {
while(running) {
val el = removalQueue.take()
val buf = el.value
private val closeFuture = object : CompletableFuture<Void>() {
init {
Thread.ofVirtual().name("in-memory-cache-gc").start {
try {
while (running) {
val el = removalQueue.poll(1, TimeUnit.SECONDS) ?: continue
val value = el.value
val now = Instant.now()
if(now > el.expiry) {
val removed = map.remove(el.key, buf)
if(removed) {
updateSizeAfterRemoval(buf)
if (now > el.expiry) {
val removed = map.remove(el.key, value)
if (removed) {
updateSizeAfterRemoval(value.content)
//Decrease the reference count for map
buf.release()
value.content.release()
}
//Decrease the reference count for removalQueue
buf.release()
} else {
removalQueue.put(el)
Thread.sleep(minOf(Duration.between(now, el.expiry), Duration.ofSeconds(1)))
}
}
complete(null)
} catch (ex: Throwable) {
completeExceptionally(ex)
}
}
}
}
private fun removeEldest() : Long {
while(true) {
fun removeEldest(): Long {
while (true) {
val el = removalQueue.take()
val buf = el.value
val removed = map.remove(el.key, buf)
//Decrease the reference count for removalQueue
buf.release()
if(removed) {
val newSize = updateSizeAfterRemoval(buf)
val value = el.value
val removed = map.remove(el.key, value)
if (removed) {
val newSize = updateSizeAfterRemoval(value.content)
//Decrease the reference count for map
buf.release()
value.content.release()
return newSize
}
}
}
private fun updateSizeAfterRemoval(removed: ByteBuf) : Long {
return size.updateAndGet { currentSize : Long ->
private fun updateSizeAfterRemoval(removed: ByteBuf): Long {
return size.updateAndGet { currentSize: Long ->
currentSize - removed.readableBytes()
}
}
override fun close() {
override fun asyncClose() : CompletableFuture<Void> {
running = false
garbageCollector.join()
return closeFuture
}
override fun get(key: String) =
(digestAlgorithm
?.let(MessageDigest::getInstance)
?.let { md ->
digestString(key.toByteArray(), md)
} ?: key
).let { digest ->
map[digest]
?.let { value ->
val copy = value.retainedDuplicate()
copy.touch("This has to be released by the caller of the cache")
if (compressionEnabled) {
val inflater = Inflater()
Channels.newChannel(InflaterInputStream(ByteBufInputStream(copy), inflater))
} else {
Channels.newChannel(ByteBufInputStream(copy))
}
}
}.let {
CompletableFuture.completedFuture(it)
fun get(key: ByteArray) = map[CacheKey(key)]?.run {
CacheEntry(metadata, content.retainedDuplicate())
}
override fun put(key: String, content: ByteBuf) =
(digestAlgorithm
?.let(MessageDigest::getInstance)
?.let { md ->
digestString(key.toByteArray(), md)
} ?: key).let { digest ->
content.retain()
val value = if (compressionEnabled) {
val deflater = Deflater(compressionLevel)
val buf = content.alloc().buffer()
buf.retain()
DeflaterOutputStream(ByteBufOutputStream(buf), deflater).use { outputStream ->
ByteBufInputStream(content).use { inputStream ->
JWO.copy(inputStream, outputStream)
}
}
buf
} else {
content
}
val old = map.put(digest, value)
val delta = value.readableBytes() - (old?.readableBytes() ?: 0)
var newSize = size.updateAndGet { currentSize : Long ->
fun put(
key: ByteArray,
value: CacheEntry,
) {
val cacheKey = CacheKey(key)
val oldSize = map.put(cacheKey, value)?.let { old ->
val result = old.content.readableBytes()
old.content.release()
result
} ?: 0
val delta = value.content.readableBytes() - oldSize
var newSize = size.updateAndGet { currentSize: Long ->
currentSize + delta
}
removalQueue.put(RemovalQueueElement(digest, value.retain(), Instant.now().plus(maxAge)))
while(newSize > maxSize) {
removalQueue.put(RemovalQueueElement(cacheKey, value, Instant.now().plus(maxAge)))
while (newSize > maxSize) {
newSize = removeEldest()
}
}.let {
CompletableFuture.completedFuture<Void>(null)
}
}
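
The CacheKey wrapper above is needed because JVM arrays compare by reference, so a raw ByteArray would never match an equivalent key already stored in the ConcurrentHashMap. A small sketch of the difference, illustrative only and assuming it runs where the private class is visible:

fun main() {
    val a = byteArrayOf(1, 2, 3)
    val b = byteArrayOf(1, 2, 3)
    check(a != b)                      // equal contents, but arrays use identity equality
    check(CacheKey(a) == CacheKey(b))  // the wrapper compares contents, so it works as a map key
}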

View File

@@ -1,5 +1,11 @@
package net.woggioni.rbcs.server.cache
import io.netty.channel.ChannelFactory
import io.netty.channel.EventLoopGroup
import io.netty.channel.socket.DatagramChannel
import io.netty.channel.socket.SocketChannel
import io.netty.util.concurrent.Future
import net.woggioni.rbcs.api.CacheHandlerFactory
import net.woggioni.rbcs.api.Configuration
import net.woggioni.rbcs.common.RBCS
import java.time.Duration
@@ -10,14 +16,19 @@ data class InMemoryCacheConfiguration(
val digestAlgorithm : String?,
val compressionEnabled: Boolean,
val compressionLevel: Int,
val chunkSize : Int
) : Configuration.Cache {
override fun materialize() = InMemoryCache(
maxAge,
maxSize,
digestAlgorithm,
compressionEnabled,
compressionLevel
)
override fun materialize() = object : CacheHandlerFactory {
private val cache = InMemoryCache(maxAge, maxSize)
override fun asyncClose() = cache.asyncClose()
override fun newHandler(
eventLoop: EventLoopGroup,
socketChannelFactory: ChannelFactory<SocketChannel>,
datagramChannelFactory: ChannelFactory<DatagramChannel>
) = InMemoryCacheHandler(cache, digestAlgorithm, compressionEnabled, compressionLevel)
}
override fun getNamespaceURI() = RBCS.RBCS_NAMESPACE_URI

View File

@@ -0,0 +1,136 @@
package net.woggioni.rbcs.server.cache
import io.netty.buffer.ByteBuf
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.SimpleChannelInboundHandler
import net.woggioni.rbcs.api.message.CacheMessage
import net.woggioni.rbcs.api.message.CacheMessage.CacheContent
import net.woggioni.rbcs.api.message.CacheMessage.CacheGetRequest
import net.woggioni.rbcs.api.message.CacheMessage.CachePutRequest
import net.woggioni.rbcs.api.message.CacheMessage.CachePutResponse
import net.woggioni.rbcs.api.message.CacheMessage.CacheValueFoundResponse
import net.woggioni.rbcs.api.message.CacheMessage.CacheValueNotFoundResponse
import net.woggioni.rbcs.api.message.CacheMessage.LastCacheContent
import net.woggioni.rbcs.common.ByteBufOutputStream
import net.woggioni.rbcs.common.RBCS.processCacheKey
import java.util.zip.Deflater
import java.util.zip.DeflaterOutputStream
import java.util.zip.InflaterOutputStream
class InMemoryCacheHandler(
private val cache: InMemoryCache,
private val digestAlgorithm: String?,
private val compressionEnabled: Boolean,
private val compressionLevel: Int
) : SimpleChannelInboundHandler<CacheMessage>() {
private interface InProgressPutRequest : AutoCloseable {
val request: CachePutRequest
val buf: ByteBuf
fun append(buf: ByteBuf)
}
private inner class InProgressPlainPutRequest(ctx: ChannelHandlerContext, override val request: CachePutRequest) :
InProgressPutRequest {
override val buf = ctx.alloc().compositeBuffer()
private val stream = ByteBufOutputStream(buf).let {
if (compressionEnabled) {
DeflaterOutputStream(it, Deflater(compressionLevel))
} else {
it
}
}
override fun append(buf: ByteBuf) {
this.buf.addComponent(true, buf.retain())
}
override fun close() {
buf.release()
}
}
private inner class InProgressCompressedPutRequest(
ctx: ChannelHandlerContext,
override val request: CachePutRequest
) : InProgressPutRequest {
override val buf = ctx.alloc().heapBuffer()
private val stream = ByteBufOutputStream(buf).let {
DeflaterOutputStream(it, Deflater(compressionLevel))
}
override fun append(buf: ByteBuf) {
buf.readBytes(stream, buf.readableBytes())
}
override fun close() {
stream.close()
}
}
private var inProgressPutRequest: InProgressPutRequest? = null
override fun channelRead0(ctx: ChannelHandlerContext, msg: CacheMessage) {
when (msg) {
is CacheGetRequest -> handleGetRequest(ctx, msg)
is CachePutRequest -> handlePutRequest(ctx, msg)
is LastCacheContent -> handleLastCacheContent(ctx, msg)
is CacheContent -> handleCacheContent(ctx, msg)
else -> ctx.fireChannelRead(msg)
}
}
private fun handleGetRequest(ctx: ChannelHandlerContext, msg: CacheGetRequest) {
cache.get(processCacheKey(msg.key, digestAlgorithm))?.let { value ->
ctx.writeAndFlush(CacheValueFoundResponse(msg.key, value.metadata))
if (compressionEnabled) {
val buf = ctx.alloc().heapBuffer()
InflaterOutputStream(ByteBufOutputStream(buf)).use {
value.content.readBytes(it, value.content.readableBytes())
value.content.release()
buf.retain()
}
ctx.writeAndFlush(LastCacheContent(buf))
} else {
ctx.writeAndFlush(LastCacheContent(value.content))
}
} ?: ctx.writeAndFlush(CacheValueNotFoundResponse())
}
private fun handlePutRequest(ctx: ChannelHandlerContext, msg: CachePutRequest) {
inProgressPutRequest = if(compressionEnabled) {
InProgressCompressedPutRequest(ctx, msg)
} else {
InProgressPlainPutRequest(ctx, msg)
}
}
private fun handleCacheContent(ctx: ChannelHandlerContext, msg: CacheContent) {
inProgressPutRequest?.append(msg.content())
}
private fun handleLastCacheContent(ctx: ChannelHandlerContext, msg: LastCacheContent) {
handleCacheContent(ctx, msg)
inProgressPutRequest?.let { inProgressRequest ->
inProgressPutRequest = null
val buf = inProgressRequest.buf
buf.retain()
inProgressRequest.close()
val cacheKey = processCacheKey(inProgressRequest.request.key, digestAlgorithm)
cache.put(cacheKey, CacheEntry(inProgressRequest.request.metadata, buf))
ctx.writeAndFlush(CachePutResponse(inProgressRequest.request.key))
}
}
override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
inProgressPutRequest?.let { req ->
req.buf.release()
inProgressPutRequest = null
}
super.exceptionCaught(ctx, cause)
}
}

View File

@@ -30,14 +30,17 @@ class InMemoryCacheProvider : CacheProvider<InMemoryCacheConfiguration> {
val compressionLevel = el.renderAttribute("compression-level")
?.let(String::toInt)
?: Deflater.DEFAULT_COMPRESSION
val digestAlgorithm = el.renderAttribute("digest") ?: "MD5"
val digestAlgorithm = el.renderAttribute("digest")
val chunkSize = el.renderAttribute("chunk-size")
?.let(Integer::decode)
?: 0x10000
return InMemoryCacheConfiguration(
maxAge,
maxSize,
digestAlgorithm,
enableCompression,
compressionLevel
compressionLevel,
chunkSize
)
}
@@ -57,6 +60,7 @@ class InMemoryCacheProvider : CacheProvider<InMemoryCacheConfiguration> {
}?.let {
attr("compression-level", it.toString())
}
attr("chunk-size", chunkSize.toString())
}
result
}

View File

@@ -27,8 +27,6 @@ object Parser {
val root = document.documentElement
val anonymousUser = User("", null, emptySet(), null)
var connection: Configuration.Connection = Configuration.Connection(
Duration.of(10, ChronoUnit.SECONDS),
Duration.of(10, ChronoUnit.SECONDS),
Duration.of(60, ChronoUnit.SECONDS),
Duration.of(30, ChronoUnit.SECONDS),
Duration.of(30, ChronoUnit.SECONDS),
@@ -113,10 +111,6 @@ object Parser {
}
"connection" -> {
val writeTimeout = child.renderAttribute("write-timeout")
?.let(Duration::parse) ?: Duration.of(0, ChronoUnit.SECONDS)
val readTimeout = child.renderAttribute("read-timeout")
?.let(Duration::parse) ?: Duration.of(0, ChronoUnit.SECONDS)
val idleTimeout = child.renderAttribute("idle-timeout")
?.let(Duration::parse) ?: Duration.of(30, ChronoUnit.SECONDS)
val readIdleTimeout = child.renderAttribute("read-idle-timeout")
@@ -124,10 +118,8 @@ object Parser {
val writeIdleTimeout = child.renderAttribute("write-idle-timeout")
?.let(Duration::parse) ?: Duration.of(60, ChronoUnit.SECONDS)
val maxRequestSize = child.renderAttribute("max-request-size")
?.let(String::toInt) ?: 67108864
?.let(Integer::decode) ?: 0x4000000
connection = Configuration.Connection(
readTimeout,
writeTimeout,
idleTimeout,
readIdleTimeout,
writeIdleTimeout,

View File

@@ -36,8 +36,6 @@ object Serializer {
}
node("connection") {
conf.connection.let { connection ->
attr("read-timeout", connection.readTimeout.toString())
attr("write-timeout", connection.writeTimeout.toString())
attr("idle-timeout", connection.idleTimeout.toString())
attr("read-idle-timeout", connection.readIdleTimeout.toString())
attr("write-idle-timeout", connection.writeIdleTimeout.toString())

View File

@@ -3,7 +3,7 @@ package net.woggioni.rbcs.server.exception
import io.netty.buffer.Unpooled
import io.netty.channel.ChannelDuplexHandler
import io.netty.channel.ChannelFutureListener
import io.netty.channel.ChannelHandler
import io.netty.channel.ChannelHandler.Sharable
import io.netty.channel.ChannelHandlerContext
import io.netty.handler.codec.DecoderException
import io.netty.handler.codec.http.DefaultFullHttpResponse
@@ -17,10 +17,16 @@ import net.woggioni.rbcs.api.exception.CacheException
import net.woggioni.rbcs.api.exception.ContentTooLargeException
import net.woggioni.rbcs.common.contextLogger
import net.woggioni.rbcs.common.debug
import net.woggioni.rbcs.common.log
import org.slf4j.event.Level
import org.slf4j.spi.LoggingEventBuilder
import java.net.ConnectException
import java.net.SocketException
import javax.net.ssl.SSLException
import javax.net.ssl.SSLPeerUnverifiedException
@ChannelHandler.Sharable
class ExceptionHandler : ChannelDuplexHandler() {
@Sharable
object ExceptionHandler : ChannelDuplexHandler() {
private val log = contextLogger()
private val NOT_AUTHORIZED: FullHttpResponse = DefaultFullHttpResponse(
@@ -29,12 +35,6 @@ class ExceptionHandler : ChannelDuplexHandler() {
headers()[HttpHeaderNames.CONTENT_LENGTH] = "0"
}
private val TOO_BIG: FullHttpResponse = DefaultFullHttpResponse(
HttpVersion.HTTP_1_1, HttpResponseStatus.REQUEST_ENTITY_TOO_LARGE, Unpooled.EMPTY_BUFFER
).apply {
headers()[HttpHeaderNames.CONTENT_LENGTH] = "0"
}
private val NOT_AVAILABLE: FullHttpResponse = DefaultFullHttpResponse(
HttpVersion.HTTP_1_1, HttpResponseStatus.SERVICE_UNAVAILABLE, Unpooled.EMPTY_BUFFER
).apply {
@@ -47,10 +47,26 @@ class ExceptionHandler : ChannelDuplexHandler() {
headers()[HttpHeaderNames.CONTENT_LENGTH] = "0"
}
private val TOO_BIG: FullHttpResponse = DefaultFullHttpResponse(
HttpVersion.HTTP_1_1, HttpResponseStatus.REQUEST_ENTITY_TOO_LARGE, Unpooled.EMPTY_BUFFER
).apply {
headers()[HttpHeaderNames.CONTENT_LENGTH] = "0"
}
override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
when (cause) {
is DecoderException -> {
log.debug(cause.message, cause)
ctx.close()
}
is ConnectException -> {
log.error(cause.message, cause)
ctx.writeAndFlush(SERVER_ERROR.retainedDuplicate())
}
is SocketException -> {
log.debug(cause.message, cause)
ctx.close()
}
@@ -59,10 +75,19 @@ class ExceptionHandler : ChannelDuplexHandler() {
.addListener(ChannelFutureListener.CLOSE_ON_FAILURE)
}
is SSLException -> {
log.debug(cause.message, cause)
ctx.close()
}
is ContentTooLargeException -> {
log.log(Level.DEBUG, ctx.channel()) { builder : LoggingEventBuilder ->
builder.setMessage("Request body is too large")
}
ctx.writeAndFlush(TOO_BIG.retainedDuplicate())
.addListener(ChannelFutureListener.CLOSE_ON_FAILURE)
}
is ReadTimeoutException -> {
log.debug {
val channelId = ctx.channel().id().asShortText()
@@ -70,6 +95,7 @@ class ExceptionHandler : ChannelDuplexHandler() {
}
ctx.close()
}
is WriteTimeoutException -> {
log.debug {
val channelId = ctx.channel().id().asShortText()
@@ -77,11 +103,13 @@ class ExceptionHandler : ChannelDuplexHandler() {
}
ctx.close()
}
is CacheException -> {
log.error(cause.message, cause)
ctx.writeAndFlush(NOT_AVAILABLE.retainedDuplicate())
.addListener(ChannelFutureListener.CLOSE_ON_FAILURE)
}
else -> {
log.error(cause.message, cause)
ctx.writeAndFlush(SERVER_ERROR.retainedDuplicate())

View File

@@ -0,0 +1,28 @@
package net.woggioni.rbcs.server.handler
import io.netty.channel.ChannelHandler.Sharable
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.SimpleChannelInboundHandler
import io.netty.handler.codec.http.HttpContent
import io.netty.handler.codec.http.LastHttpContent
import net.woggioni.rbcs.api.message.CacheMessage.CacheContent
import net.woggioni.rbcs.api.message.CacheMessage.LastCacheContent
@Sharable
object CacheContentHandler : SimpleChannelInboundHandler<HttpContent>() {
val NAME = this::class.java.name
override fun channelRead0(ctx: ChannelHandlerContext, msg: HttpContent) {
when(msg) {
is LastHttpContent -> {
ctx.fireChannelRead(LastCacheContent(msg.content().retain()))
ctx.pipeline().remove(this)
}
else -> ctx.fireChannelRead(CacheContent(msg.content().retain()))
}
}
override fun exceptionCaught(ctx: ChannelHandlerContext?, cause: Throwable?) {
super.exceptionCaught(ctx, cause)
}
}

View File

@@ -0,0 +1,40 @@
package net.woggioni.rbcs.server.handler
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.ChannelInboundHandlerAdapter
import io.netty.handler.codec.http.HttpContent
import io.netty.handler.codec.http.HttpRequest
import net.woggioni.rbcs.api.exception.ContentTooLargeException
class MaxRequestSizeHandler(private val maxRequestSize : Int) : ChannelInboundHandlerAdapter() {
companion object {
val NAME = MaxRequestSizeHandler::class.java.name
}
private var cumulativeSize = 0
override fun channelRead(ctx: ChannelHandlerContext, msg: Any) {
when(msg) {
is HttpRequest -> {
cumulativeSize = 0
ctx.fireChannelRead(msg)
}
is HttpContent -> {
val exceeded = cumulativeSize > maxRequestSize
if(!exceeded) {
cumulativeSize += msg.content().readableBytes()
}
if(cumulativeSize > maxRequestSize) {
msg.release()
if(!exceeded) {
ctx.fireExceptionCaught(ContentTooLargeException("Request body is too large", null))
}
} else {
ctx.fireChannelRead(msg)
}
}
else -> ctx.fireChannelRead(msg)
}
}
}
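
The new handler is stateful (it tracks the cumulative body size of the current request), so it needs one instance per channel. A hedged sketch of how it might be wired into a pipeline; the function, pipeline variable and limit value are illustrative, only the handler and its NAME constant come from the diff:

import io.netty.channel.ChannelPipeline
import net.woggioni.rbcs.server.handler.MaxRequestSizeHandler

fun addRequestSizeLimit(pipeline: ChannelPipeline, maxRequestSize: Int = 0x4000000) {
    pipeline.addLast(MaxRequestSizeHandler.NAME, MaxRequestSizeHandler(maxRequestSize))
}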

View File

@@ -1,95 +1,148 @@
package net.woggioni.rbcs.server.handler
import io.netty.buffer.Unpooled
import io.netty.channel.ChannelFutureListener
import io.netty.channel.ChannelHandler
import io.netty.channel.ChannelDuplexHandler
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.DefaultFileRegion
import io.netty.channel.SimpleChannelInboundHandler
import io.netty.channel.ChannelPromise
import io.netty.handler.codec.http.DefaultFullHttpResponse
import io.netty.handler.codec.http.DefaultHttpContent
import io.netty.handler.codec.http.DefaultHttpResponse
import io.netty.handler.codec.http.FullHttpRequest
import io.netty.handler.codec.http.DefaultLastHttpContent
import io.netty.handler.codec.http.HttpHeaderNames
import io.netty.handler.codec.http.HttpHeaderValues
import io.netty.handler.codec.http.HttpHeaders
import io.netty.handler.codec.http.HttpMethod
import io.netty.handler.codec.http.HttpRequest
import io.netty.handler.codec.http.HttpResponseStatus
import io.netty.handler.codec.http.HttpUtil
import io.netty.handler.codec.http.LastHttpContent
import io.netty.handler.stream.ChunkedNioStream
import net.woggioni.rbcs.api.Cache
import net.woggioni.rbcs.common.contextLogger
import net.woggioni.rbcs.server.debug
import net.woggioni.rbcs.server.warn
import java.nio.channels.FileChannel
import io.netty.handler.codec.http.HttpVersion
import net.woggioni.rbcs.api.CacheValueMetadata
import net.woggioni.rbcs.api.message.CacheMessage
import net.woggioni.rbcs.api.message.CacheMessage.CacheContent
import net.woggioni.rbcs.api.message.CacheMessage.CacheGetRequest
import net.woggioni.rbcs.api.message.CacheMessage.CachePutRequest
import net.woggioni.rbcs.api.message.CacheMessage.CachePutResponse
import net.woggioni.rbcs.api.message.CacheMessage.CacheValueFoundResponse
import net.woggioni.rbcs.api.message.CacheMessage.CacheValueNotFoundResponse
import net.woggioni.rbcs.api.message.CacheMessage.LastCacheContent
import net.woggioni.rbcs.common.createLogger
import net.woggioni.rbcs.common.debug
import net.woggioni.rbcs.common.warn
import java.nio.file.Path
import java.util.Locale
@ChannelHandler.Sharable
class ServerHandler(private val cache: Cache, private val serverPrefix: Path) :
SimpleChannelInboundHandler<FullHttpRequest>() {
class ServerHandler(private val serverPrefix: Path) :
ChannelDuplexHandler() {
private val log = contextLogger()
companion object {
private val log = createLogger<ServerHandler>()
val NAME = this::class.java.name
}
override fun channelRead0(ctx: ChannelHandlerContext, msg: FullHttpRequest) {
val keepAlive: Boolean = HttpUtil.isKeepAlive(msg)
private var httpVersion = HttpVersion.HTTP_1_1
private var keepAlive = true
private fun resetRequestMetadata() {
httpVersion = HttpVersion.HTTP_1_1
keepAlive = true
}
private fun setRequestMetadata(req: HttpRequest) {
httpVersion = req.protocolVersion()
keepAlive = HttpUtil.isKeepAlive(req)
}
private fun setKeepAliveHeader(headers: HttpHeaders) {
if (!keepAlive) {
headers.set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE)
} else {
headers.set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE)
}
}
override fun channelRead(ctx: ChannelHandlerContext, msg: Any) {
when (msg) {
is HttpRequest -> handleRequest(ctx, msg)
else -> super.channelRead(ctx, msg)
}
}
override fun write(ctx: ChannelHandlerContext, msg: Any, promise: ChannelPromise?) {
if (msg is CacheMessage) {
try {
when (msg) {
is CachePutResponse -> {
val response = DefaultFullHttpResponse(httpVersion, HttpResponseStatus.CREATED)
val keyBytes = msg.key.toByteArray(Charsets.UTF_8)
response.headers().apply {
set(HttpHeaderNames.CONTENT_TYPE, HttpHeaderValues.TEXT_PLAIN)
set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED)
}
setKeepAliveHeader(response.headers())
ctx.write(response)
val buf = ctx.alloc().buffer(keyBytes.size).apply {
writeBytes(keyBytes)
}
ctx.writeAndFlush(DefaultLastHttpContent(buf))
}
is CacheValueNotFoundResponse -> {
val response = DefaultFullHttpResponse(httpVersion, HttpResponseStatus.NOT_FOUND)
response.headers()[HttpHeaderNames.CONTENT_LENGTH] = 0
setKeepAliveHeader(response.headers())
ctx.writeAndFlush(response)
}
is CacheValueFoundResponse -> {
val response = DefaultHttpResponse(httpVersion, HttpResponseStatus.OK)
response.headers().apply {
set(HttpHeaderNames.CONTENT_TYPE, msg.metadata.mimeType ?: HttpHeaderValues.APPLICATION_OCTET_STREAM)
msg.metadata.contentDisposition?.let { contentDisposition ->
set(HttpHeaderNames.CONTENT_DISPOSITION, contentDisposition)
}
}
setKeepAliveHeader(response.headers())
response.headers().set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED)
ctx.writeAndFlush(response)
}
is LastCacheContent -> {
ctx.writeAndFlush(DefaultLastHttpContent(msg.content()))
}
is CacheContent -> {
ctx.writeAndFlush(DefaultHttpContent(msg.content()))
}
else -> throw UnsupportedOperationException("This should never happen")
}.let { channelFuture ->
if (promise != null) {
channelFuture.addListener {
if (it.isSuccess) promise.setSuccess()
else promise.setFailure(it.cause())
}
}
}
} finally {
resetRequestMetadata()
}
} else super.write(ctx, msg, promise)
}
private fun handleRequest(ctx: ChannelHandlerContext, msg: HttpRequest) {
setRequestMetadata(msg)
val method = msg.method()
if (method === HttpMethod.GET) {
val path = Path.of(msg.uri())
val prefix = path.parent
val key = path.fileName?.toString() ?: let {
val response = DefaultFullHttpResponse(msg.protocolVersion(), HttpResponseStatus.NOT_FOUND)
response.headers()[HttpHeaderNames.CONTENT_LENGTH] = 0
ctx.writeAndFlush(response)
return
}
if (serverPrefix == prefix) {
cache.get(key).thenApply { channel ->
if(channel != null) {
log.debug(ctx) {
"Cache hit for key '$key'"
}
val response = DefaultHttpResponse(msg.protocolVersion(), HttpResponseStatus.OK)
response.headers()[HttpHeaderNames.CONTENT_TYPE] = HttpHeaderValues.APPLICATION_OCTET_STREAM
if (!keepAlive) {
response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.CLOSE)
response.headers().set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.IDENTITY)
} else {
response.headers().set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE)
response.headers().set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED)
}
ctx.write(response)
when (channel) {
is FileChannel -> {
val content = DefaultFileRegion(channel, 0, channel.size())
if (keepAlive) {
ctx.write(content)
ctx.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT.retainedDuplicate())
} else {
ctx.writeAndFlush(content)
.addListener(ChannelFutureListener.CLOSE)
}
}
else -> {
val content = ChunkedNioStream(channel)
if (keepAlive) {
ctx.write(content).addListener {
content.close()
}
ctx.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT.retainedDuplicate())
} else {
ctx.writeAndFlush(content)
.addListener(ChannelFutureListener.CLOSE)
}
}
}
} else {
log.debug(ctx) {
"Cache miss for key '$key'"
}
val response = DefaultFullHttpResponse(msg.protocolVersion(), HttpResponseStatus.NOT_FOUND)
response.headers()[HttpHeaderNames.CONTENT_LENGTH] = 0
ctx.writeAndFlush(response)
}
}.whenComplete { _, ex -> ex?.let(ctx::fireExceptionCaught) }
val path = Path.of(msg.uri()).normalize()
if (path.startsWith(serverPrefix)) {
val relativePath = serverPrefix.relativize(path)
val key = relativePath.toString()
ctx.pipeline().addAfter(NAME, CacheContentHandler.NAME, CacheContentHandler)
key.let(::CacheGetRequest)
.let(ctx::fireChannelRead)
?: ctx.channel().write(CacheValueNotFoundResponse())
} else {
log.warn(ctx) {
"Got request for unhandled path '${msg.uri()}'"
@@ -99,24 +152,21 @@ class ServerHandler(private val cache: Cache, private val serverPrefix: Path) :
ctx.writeAndFlush(response)
}
} else if (method === HttpMethod.PUT) {
val path = Path.of(msg.uri())
val prefix = path.parent
val key = path.fileName.toString()
if (serverPrefix == prefix) {
val path = Path.of(msg.uri()).normalize()
if (path.startsWith(serverPrefix)) {
val relativePath = serverPrefix.relativize(path)
val key = relativePath.toString()
log.debug(ctx) {
"Added value for key '$key' to build cache"
}
cache.put(key, msg.content()).thenRun {
val response = DefaultFullHttpResponse(
msg.protocolVersion(), HttpResponseStatus.CREATED,
Unpooled.copiedBuffer(key.toByteArray())
)
response.headers()[HttpHeaderNames.CONTENT_LENGTH] = response.content().readableBytes()
ctx.writeAndFlush(response)
}.whenComplete { _, ex ->
ctx.fireExceptionCaught(ex)
ctx.pipeline().addAfter(NAME, CacheContentHandler.NAME, CacheContentHandler)
path.fileName?.toString()
?.let {
val mimeType = HttpUtil.getMimeType(msg)?.toString()
CachePutRequest(key, CacheValueMetadata(msg.headers().get(HttpHeaderNames.CONTENT_DISPOSITION), mimeType))
}
?.let(ctx::fireChannelRead)
?: ctx.channel().write(CacheValueNotFoundResponse())
} else {
log.warn(ctx) {
"Got request for unhandled path '${msg.uri()}'"
@@ -125,30 +175,8 @@ class ServerHandler(private val cache: Cache, private val serverPrefix: Path) :
response.headers()[HttpHeaderNames.CONTENT_LENGTH] = "0"
ctx.writeAndFlush(response)
}
} else if(method == HttpMethod.TRACE) {
val replayedRequestHead = ctx.alloc().buffer()
replayedRequestHead.writeCharSequence("TRACE ${Path.of(msg.uri())} ${msg.protocolVersion().text()}\r\n", Charsets.US_ASCII)
msg.headers().forEach { (key, value) ->
replayedRequestHead.apply {
writeCharSequence(key, Charsets.US_ASCII)
writeCharSequence(": ", Charsets.US_ASCII)
writeCharSequence(value, Charsets.UTF_8)
writeCharSequence("\r\n", Charsets.US_ASCII)
}
}
replayedRequestHead.writeCharSequence("\r\n", Charsets.US_ASCII)
val requestBody = msg.content()
requestBody.retain()
val responseBody = ctx.alloc().compositeBuffer(2).apply {
addComponents(true, replayedRequestHead)
addComponents(true, requestBody)
}
val response = DefaultFullHttpResponse(msg.protocolVersion(), HttpResponseStatus.OK, responseBody)
response.headers().apply {
set(HttpHeaderNames.CONTENT_TYPE, "message/http")
set(HttpHeaderNames.CONTENT_LENGTH, responseBody.readableBytes())
}
ctx.writeAndFlush(response)
} else if (method == HttpMethod.TRACE) {
super.channelRead(ctx, msg)
} else {
log.warn(ctx) {
"Got request with unhandled method '${msg.method().name()}'"
@@ -158,4 +186,44 @@ class ServerHandler(private val cache: Cache, private val serverPrefix: Path) :
ctx.writeAndFlush(response)
}
}
data class ContentDisposition(val type: Type?, val fileName: String?) {
enum class Type {
attachment, `inline`;
companion object {
@JvmStatic
fun parse(maybeString: String?) = maybeString.let { s ->
try {
java.lang.Enum.valueOf(Type::class.java, s)
} catch (ex: IllegalArgumentException) {
null
}
}
}
}
companion object {
@JvmStatic
fun parse(contentDisposition: String) : ContentDisposition {
val parts = contentDisposition.split(";").dropLastWhile { it.isEmpty() }.toTypedArray()
val dispositionType = parts[0].trim { it <= ' ' }.let(Type::parse) // Get the type (e.g., attachment)
var filename: String? = null
for (i in 1..<parts.size) {
val part = parts[i].trim { it <= ' ' }
if (part.lowercase(Locale.getDefault()).startsWith("filename=")) {
filename = part.substring("filename=".length).trim { it <= ' ' }.replace("\"", "")
break
}
}
return ContentDisposition(dispositionType, filename)
}
}
}
override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
super.exceptionCaught(ctx, cause)
}
}
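
A quick usage sketch of the ContentDisposition parser introduced above; the sample header value is arbitrary, and the class name may need to be qualified with its enclosing scope:

fun main() {
    val parsed = ContentDisposition.parse("attachment; filename=\"build-cache.bin\"")
    check(parsed.type == ContentDisposition.Type.attachment)
    check(parsed.fileName == "build-cache.bin")
}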

View File

@@ -0,0 +1,54 @@
package net.woggioni.rbcs.server.handler
import io.netty.channel.ChannelHandler.Sharable
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.ChannelInboundHandlerAdapter
import io.netty.handler.codec.http.DefaultHttpResponse
import io.netty.handler.codec.http.HttpContent
import io.netty.handler.codec.http.HttpHeaderNames
import io.netty.handler.codec.http.HttpHeaderValues
import io.netty.handler.codec.http.HttpRequest
import io.netty.handler.codec.http.HttpResponseStatus
import io.netty.handler.codec.http.LastHttpContent
import java.nio.file.Path
@Sharable
object TraceHandler : ChannelInboundHandlerAdapter() {
val NAME = this::class.java.name
override fun channelRead(ctx: ChannelHandlerContext, msg: Any) {
when(msg) {
is HttpRequest -> {
val response = DefaultHttpResponse(msg.protocolVersion(), HttpResponseStatus.OK)
response.headers().apply {
set(HttpHeaderNames.CONTENT_TYPE, "message/http")
set(HttpHeaderNames.TRANSFER_ENCODING, HttpHeaderValues.CHUNKED)
}
ctx.write(response)
val replayedRequestHead = ctx.alloc().buffer()
replayedRequestHead.writeCharSequence(
"TRACE ${Path.of(msg.uri())} ${msg.protocolVersion().text()}\r\n",
Charsets.US_ASCII
)
msg.headers().forEach { (key, value) ->
replayedRequestHead.apply {
writeCharSequence(key, Charsets.US_ASCII)
writeCharSequence(": ", Charsets.US_ASCII)
writeCharSequence(value, Charsets.UTF_8)
writeCharSequence("\r\n", Charsets.US_ASCII)
}
}
replayedRequestHead.writeCharSequence("\r\n", Charsets.US_ASCII)
ctx.writeAndFlush(replayedRequestHead)
}
is LastHttpContent -> {
ctx.writeAndFlush(msg)
}
is HttpContent -> ctx.writeAndFlush(msg)
else -> super.channelRead(ctx, msg)
}
}
override fun exceptionCaught(ctx: ChannelHandlerContext?, cause: Throwable?) {
super.exceptionCaught(ctx, cause)
}
}

View File

@@ -1,7 +1,7 @@
package net.woggioni.rbcs.server.throttling
import net.woggioni.rbcs.api.Configuration
import net.woggioni.jwo.Bucket
import net.woggioni.rbcs.api.Configuration
import java.net.InetSocketAddress
import java.util.Arrays
import java.util.concurrent.ConcurrentHashMap

View File

@@ -1,31 +1,32 @@
package net.woggioni.rbcs.server.throttling
import io.netty.channel.ChannelHandler.Sharable
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.ChannelInboundHandlerAdapter
import io.netty.handler.codec.http.DefaultFullHttpResponse
import io.netty.handler.codec.http.HttpContent
import io.netty.handler.codec.http.HttpHeaderNames
import io.netty.handler.codec.http.HttpRequest
import io.netty.handler.codec.http.HttpResponseStatus
import io.netty.handler.codec.http.HttpVersion
import net.woggioni.rbcs.api.Configuration
import net.woggioni.rbcs.common.contextLogger
import net.woggioni.rbcs.server.RemoteBuildCacheServer
import net.woggioni.jwo.Bucket
import net.woggioni.jwo.LongMath
import net.woggioni.rbcs.api.Configuration
import net.woggioni.rbcs.common.createLogger
import net.woggioni.rbcs.server.RemoteBuildCacheServer
import java.net.InetSocketAddress
import java.time.Duration
import java.time.temporal.ChronoUnit
import java.util.concurrent.TimeUnit
@Sharable
class ThrottlingHandler(cfg: Configuration) :
ChannelInboundHandlerAdapter() {
class ThrottlingHandler(private val bucketManager : BucketManager,
private val connectionConfiguration : Configuration.Connection) : ChannelInboundHandlerAdapter() {
private val log = contextLogger()
private val bucketManager = BucketManager.from(cfg)
private companion object {
private val log = createLogger<ThrottlingHandler>()
}
private val connectionConfiguration = cfg.connection
private var queuedContent : MutableList<HttpContent>? = null
/**
* If the suggested waiting time from the bucket is lower than this
@@ -38,7 +39,10 @@ class ThrottlingHandler(cfg: Configuration) :
connectionConfiguration.writeIdleTimeout
).dividedBy(2)
override fun channelRead(ctx: ChannelHandlerContext, msg: Any) {
if(msg is HttpRequest) {
val buckets = mutableListOf<Bucket>()
val user = ctx.channel().attr(RemoteBuildCacheServer.userAttribute).get()
if (user != null) {
@@ -54,13 +58,19 @@ class ThrottlingHandler(cfg: Configuration) :
bucketManager.getBucketByAddress(ctx.channel().remoteAddress() as InetSocketAddress)?.let(buckets::add)
}
if (buckets.isEmpty()) {
return super.channelRead(ctx, msg)
super.channelRead(ctx, msg)
} else {
handleBuckets(buckets, ctx, msg, true)
}
ctx.channel().id()
} else if(msg is HttpContent) {
queuedContent?.add(msg) ?: super.channelRead(ctx, msg)
} else {
super.channelRead(ctx, msg)
}
}
private fun handleBuckets(buckets : List<Bucket>, ctx : ChannelHandlerContext, msg : Any, delayResponse : Boolean) {
private fun handleBuckets(buckets: List<Bucket>, ctx: ChannelHandlerContext, msg: Any, delayResponse: Boolean) {
var nextAttempt = -1L
for (bucket in buckets) {
val bucketNextAttempt = bucket.removeTokensWithEstimate(1)
@@ -68,19 +78,27 @@ class ThrottlingHandler(cfg: Configuration) :
nextAttempt = bucketNextAttempt
}
}
if(nextAttempt < 0) {
if (nextAttempt < 0) {
super.channelRead(ctx, msg)
return
queuedContent?.let {
for(content in it) {
super.channelRead(ctx, content)
}
queuedContent = null
}
} else {
val waitDuration = Duration.of(LongMath.ceilDiv(nextAttempt, 100_000_000L) * 100L, ChronoUnit.MILLIS)
if (delayResponse && waitDuration < waitThreshold) {
this.queuedContent = mutableListOf()
ctx.executor().schedule({
handleBuckets(buckets, ctx, msg, false)
}, waitDuration.toMillis(), TimeUnit.MILLISECONDS)
} else {
this.queuedContent = null
sendThrottledResponse(ctx, waitDuration)
}
}
}
private fun sendThrottledResponse(ctx: ChannelHandlerContext, retryAfter: Duration) {
val response = DefaultFullHttpResponse(

View File

@@ -4,16 +4,5 @@
xmlns:rbcs="urn:net.woggioni.rbcs.server"
xs:schemaLocation="urn:net.woggioni.rbcs.server jpms://net.woggioni.rbcs.server/net/woggioni/rbcs/server/schema/rbcs.xsd">
<bind host="127.0.0.1" port="8080" incoming-connections-backlog-size="1024"/>
<connection
max-request-size="67108864"
idle-timeout="PT30S"
read-timeout="PT10S"
write-timeout="PT10S"
read-idle-timeout="PT60S"
write-idle-timeout="PT60S"/>
<event-executor use-virtual-threads="true"/>
<cache xs:type="rbcs:fileSystemCacheType" path="/tmp/rbcs" max-age="P7D"/>
<authentication>
<none/>
</authentication>
<cache xs:type="rbcs:fileSystemCacheType" path="${sys:java.io.tmpdir}/rbcs" max-age="P7D"/>
</rbcs:server>
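
For reference, a hedged example of a fuller configuration exercising the connection and cache attributes documented in the schema below; every value is illustrative, only the element and attribute names come from the schema:

<rbcs:server xmlns:xs="http://www.w3.org/2001/XMLSchema-instance"
             xmlns:rbcs="urn:net.woggioni.rbcs.server"
             path="cache">
    <bind host="0.0.0.0" port="8080" incoming-connections-backlog-size="1024"/>
    <connection
        max-request-size="0x4000000"
        idle-timeout="PT30S"
        read-idle-timeout="PT60S"
        write-idle-timeout="PT60S"/>
    <event-executor use-virtual-threads="true"/>
    <cache xs:type="rbcs:fileSystemCacheType"
           path="/var/cache/rbcs"
           max-age="P7D"
           digest="SHA-256"
           enable-compression="true"
           compression-level="-1"
           chunk-size="0x10000"/>
</rbcs:server>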

View File

@@ -3,14 +3,27 @@
xmlns:xs="http://www.w3.org/2001/XMLSchema"
xmlns:rbcs="urn:net.woggioni.rbcs.server"
elementFormDefault="unqualified">
<xs:element name="server" type="rbcs:serverType"/>
<xs:element name="server" type="rbcs:serverType">
<xs:annotation>
<xs:documentation>
Root element containing the server configuration
</xs:documentation>
</xs:annotation>
</xs:element>
<xs:complexType name="serverType">
<xs:sequence minOccurs="0">
<xs:element name="bind" type="rbcs:bindType" maxOccurs="1"/>
<xs:element name="connection" type="rbcs:connectionType" minOccurs="0" maxOccurs="1"/>
<xs:element name="event-executor" type="rbcs:eventExecutorType" minOccurs="0" maxOccurs="1"/>
<xs:element name="cache" type="rbcs:cacheType" maxOccurs="1"/>
<xs:element name="cache" type="rbcs:cacheType" maxOccurs="1">
<xs:annotation>
<xs:documentation>
Cache storage backend implementation to use; more implementations can be added through
the use of plugins
</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="authorization" type="rbcs:authorizationType" minOccurs="0">
<xs:key name="userId">
<xs:selector xpath="users/user"/>
@@ -21,73 +34,279 @@
<xs:field xpath="@ref"/>
</xs:keyref>
</xs:element>
<xs:element name="authentication" type="rbcs:authenticationType" minOccurs="0" maxOccurs="1"/>
<xs:element name="tls" type="rbcs:tlsType" minOccurs="0" maxOccurs="1"/>
<xs:element name="authentication" type="rbcs:authenticationType" minOccurs="0" maxOccurs="1">
<xs:annotation>
<xs:documentation>
Mechanism used to assign a username to a specific client
</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="tls" type="rbcs:tlsType" minOccurs="0" maxOccurs="1">
<xs:annotation>
<xs:documentation>
Use TLS to encrypt all the communications
</xs:documentation>
</xs:annotation>
</xs:element>
</xs:sequence>
<xs:attribute name="path" type="xs:string" use="optional"/>
<xs:attribute name="path" type="xs:string" use="optional">
<xs:annotation>
<xs:documentation>
URI path prefix. If rbcs is hosted at "http://www.example.com"
and this parameter is set to "cache", then all requests must be sent to
"http://www.example.com/cache/KEY", where "KEY" is the cache entry key
</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:complexType>
<xs:complexType name="bindType">
<xs:attribute name="host" type="xs:token" use="required"/>
<xs:attribute name="port" type="xs:unsignedShort" use="required"/>
<xs:attribute name="incoming-connections-backlog-size" type="xs:unsignedInt" use="optional" default="1024"/>
<xs:attribute name="host" type="xs:token" use="required">
<xs:annotation>
<xs:documentation>Server bind address</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="port" type="xs:unsignedShort" use="required">
<xs:annotation>
<xs:documentation>Server port number</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="incoming-connections-backlog-size" type="xs:unsignedInt" use="optional" default="1024">
<xs:annotation>
<xs:documentation>
The maximum queue length for incoming connection indications (a request to connect) is set to
the backlog parameter. If a connection indication arrives when the queue is full,
the connection is refused.
</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:complexType>
<xs:complexType name="connectionType">
<xs:attribute name="read-timeout" type="xs:duration" use="optional" default="PT0S"/>
<xs:attribute name="write-timeout" type="xs:duration" use="optional" default="PT0S"/>
<xs:attribute name="idle-timeout" type="xs:duration" use="optional" default="PT30S"/>
<xs:attribute name="read-idle-timeout" type="xs:duration" use="optional" default="PT60S"/>
<xs:attribute name="write-idle-timeout" type="xs:duration" use="optional" default="PT60S"/>
<xs:attribute name="max-request-size" type="xs:unsignedInt" use="optional" default="67108864"/>
<xs:attribute name="idle-timeout" type="xs:duration" use="optional" default="PT30S">
<xs:annotation>
<xs:documentation>
The server will close the connection with the client
when neither a read nor a write was performed for the specified period of time.
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="read-idle-timeout" type="xs:duration" use="optional" default="PT60S">
<xs:annotation>
<xs:documentation>
The server will close the connection with the client
when no read was performed for the specified period of time.
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="write-idle-timeout" type="xs:duration" use="optional" default="PT60S">
<xs:annotation>
<xs:documentation>
The server will close the connection with the client
when no write was performed for the specified period of time.
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="max-request-size" type="rbcs:byteSizeType" use="optional" default="0x4000000">
<xs:annotation>
<xs:documentation>
The maximum request body size the server will accept from a client
(if exceeded, the server responds with HTTP status 413)
</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:complexType>
<xs:complexType name="eventExecutorType">
<xs:attribute name="use-virtual-threads" type="xs:boolean" use="optional" default="true"/>
<xs:attribute name="use-virtual-threads" type="xs:boolean" use="optional" default="true">
<xs:annotation>
<xs:documentation>
Whether or not to use virtual threads for the execution of the core server handler
(not for the I/O operations)
</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:complexType>
<xs:complexType name="cacheType" abstract="true"/>
<xs:complexType name="inMemoryCacheType">
<xs:annotation>
<xs:documentation>
A simple cache implementation that uses a java.util.concurrent.ConcurrentHashMap as a storage backend
</xs:documentation>
</xs:annotation>
<xs:complexContent>
<xs:extension base="rbcs:cacheType">
<xs:attribute name="max-age" type="xs:duration" default="P1D"/>
<xs:attribute name="max-size" type="xs:token" default="0x1000000"/>
<xs:attribute name="digest" type="xs:token" default="MD5"/>
<xs:attribute name="enable-compression" type="xs:boolean" default="true"/>
<xs:attribute name="compression-level" type="xs:byte" default="-1"/>
<xs:attribute name="max-age" type="xs:duration" default="P1D">
<xs:annotation>
<xs:documentation>
Values will be removed from the cache after this amount of time
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="max-size" type="rbcs:byteSizeType" default="0x1000000">
<xs:annotation>
<xs:documentation>
The maximum allowed total size of the cache in bytes; old values will be purged from the cache
when the insertion of a new value causes this limit to be exceeded
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="digest" type="xs:token">
<xs:annotation>
<xs:documentation>
Hashing algorithm to apply to the key. If omitted, no hashing is performed.
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="enable-compression" type="xs:boolean" default="true">
<xs:annotation>
<xs:documentation>
Enable deflate compression for stored cache elements
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="compression-level" type="rbcs:compressionLevelType" default="-1">
<xs:annotation>
<xs:documentation>
Deflate compression level to use for cache compression,
use -1 to use the default compression level of java.util.zip.Deflater
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="chunk-size" type="rbcs:byteSizeType" default="0x10000">
<xs:annotation>
<xs:documentation>
Maximum byte size of socket write calls
</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:extension>
</xs:complexContent>
</xs:complexType>
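A hypothetical in-memory cache declaration touching all of these attributes could look as follows; every value shown is an assumption picked for illustration.

```xml
<!-- illustrative values only -->
<cache xs:type="rbcs:inMemoryCacheType"
       max-age="P7D"
       max-size="0x1000000"
       digest="MD5"
       enable-compression="true"
       compression-level="6"
       chunk-size="0x10000"/>
```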
<xs:complexType name="fileSystemCacheType">
<xs:annotation>
<xs:documentation>
A simple cache implementation that stores data in a folder on the filesystem
</xs:documentation>
</xs:annotation>
<xs:complexContent>
<xs:extension base="rbcs:cacheType">
<xs:attribute name="path" type="xs:string" use="required"/>
<xs:attribute name="max-age" type="xs:duration" default="P1D"/>
<xs:attribute name="digest" type="xs:token" default="MD5"/>
<xs:attribute name="enable-compression" type="xs:boolean" default="true"/>
<xs:attribute name="compression-level" type="xs:byte" default="-1"/>
<xs:attribute name="path" type="xs:string" use="optional">
<xs:annotation>
<xs:documentation>
File system path that will be used to store the cache data files
(it will be created if it doesn't already exist)
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="max-age" type="xs:duration" default="P1D">
<xs:annotation>
<xs:documentation>
Values will be removed from the cache after this amount of time
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="digest" type="xs:token" default="SHA3-224">
<xs:annotation>
<xs:documentation>
Hashing algorithm to apply to the key. If omitted, no hashing is performed.
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="enable-compression" type="xs:boolean" default="true">
<xs:annotation>
<xs:documentation>
Enable deflate compression for stored cache elements
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="compression-level" type="rbcs:compressionLevelType" default="-1">
<xs:annotation>
<xs:documentation>
                        Deflate compression level to use for cache compression;
                        use -1 for the default compression level of java.util.zip.Deflater
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="chunk-size" type="rbcs:byteSizeType" default="0x10000">
<xs:annotation>
<xs:documentation>
Maximum byte size of a cache value that will be stored in memory
                        (reduce it to lower memory consumption, increase it for higher throughput)
</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:extension>
</xs:complexContent>
</xs:complexType>
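Likewise, a sketch of a filesystem cache declaration; the path and the other values are assumptions.

```xml
<!-- illustrative values only; the path is an assumption -->
<cache xs:type="rbcs:fileSystemCacheType"
       path="/var/cache/rbcs"
       max-age="P7D"
       digest="SHA3-224"
       enable-compression="false"
       chunk-size="0x10000"/>
```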
<xs:complexType name="tlsCertificateAuthorizationType">
<xs:sequence>
<xs:element name="group-extractor" type="rbcs:X500NameExtractorType" minOccurs="0"/>
<xs:element name="user-extractor" type="rbcs:X500NameExtractorType" minOccurs="0"/>
<xs:element name="group-extractor" type="rbcs:X500NameExtractorType" minOccurs="0">
<xs:annotation>
<xs:documentation>
                        A regex-based extractor that will be used to determine which group the client belongs to,
                        based on the X.500 name of the subject field in the client's TLS certificate.
                        When this is set, RBAC works even if the user isn't listed in the &lt;users/&gt; section,
                        as the client is assigned a role solely based on the group it is found to belong to.
                        Note that this does not allow a client to be part of multiple groups.
</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="user-extractor" type="rbcs:X500NameExtractorType" minOccurs="0">
<xs:annotation>
<xs:documentation>
                        A regex-based extractor that will be used to assign a user to a connected client,
                        based on the X.500 name of the subject field in the client's TLS certificate.
</xs:documentation>
</xs:annotation>
</xs:element>
</xs:sequence>
</xs:complexType>
<xs:complexType name="X500NameExtractorType">
<xs:attribute name="attribute-name" type="xs:token"/>
<xs:attribute name="pattern" type="xs:token"/>
<xs:annotation>
<xs:documentation>
                Extracts information from a client's TLS certificate using
                regular expressions applied to the X.500 name "Subject" field
</xs:documentation>
</xs:annotation>
<xs:attribute name="attribute-name" type="xs:token">
<xs:annotation>
<xs:documentation>
                    X.500 name attribute to apply the regex to
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="pattern" type="xs:token">
<xs:annotation>
<xs:documentation>
                    Regex that will be applied to the attribute value;
                    use regex groups to extract the relevant data
                    (note that only the first group that appears in the regex is used)
</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:complexType>
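Putting the two types together, a client-certificate authentication block with both extractors might be sketched as below; the attribute names ("CN", "OU") and the patterns are assumptions chosen for illustration.

```xml
<!-- illustrative sketch: attribute names and patterns are assumptions -->
<authentication>
    <client-certificate>
        <group-extractor attribute-name="OU" pattern="(.+)"/>
        <user-extractor attribute-name="CN" pattern="(.+)"/>
    </client-certificate>
</authentication>
```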
<xs:complexType name="authorizationType">
<xs:all>
<xs:element name="users" type="rbcs:usersType"/>
<xs:element name="users" type="rbcs:usersType">
<xs:annotation>
<xs:documentation>
List of users registered in the application
</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="groups" type="rbcs:groupsType">
<xs:annotation>
<xs:documentation>
List of user groups registered in the application
</xs:documentation>
</xs:annotation>
<xs:unique name="groupKey">
<xs:selector xpath="group"/>
<xs:field xpath="@name"/>
@@ -97,35 +316,127 @@
</xs:complexType>
<xs:complexType name="authenticationType">
<xs:annotation>
<xs:documentation>
Authentication mechanism to assign usernames and groups to clients
</xs:documentation>
</xs:annotation>
<xs:choice>
<xs:element name="basic"/>
<xs:element name="client-certificate" type="rbcs:tlsCertificateAuthorizationType"/>
<xs:element name="none"/>
<xs:element name="basic">
<xs:annotation>
<xs:documentation>
Enable HTTP basic authentication
</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="client-certificate" type="rbcs:tlsCertificateAuthorizationType">
<xs:annotation>
<xs:documentation>
Enable TLS certificate authentication
</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="none">
<xs:annotation>
<xs:documentation>
Disable authentication altogether
</xs:documentation>
</xs:annotation>
</xs:element>
</xs:choice>
</xs:complexType>
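For instance, enabling HTTP basic authentication amounts to picking the corresponding child element:

```xml
<authentication>
    <basic/>
</authentication>
```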
<xs:complexType name="quotaType">
<xs:attribute name="calls" type="xs:positiveInteger" use="required"/>
<xs:attribute name="period" type="xs:duration" use="required"/>
<xs:attribute name="max-available-calls" type="xs:positiveInteger" use="optional"/>
<xs:attribute name="initial-available-calls" type="xs:unsignedInt" use="optional"/>
<xs:annotation>
<xs:documentation>
Defines a quota for a user or a group
</xs:documentation>
</xs:annotation>
<xs:attribute name="calls" type="xs:positiveInteger" use="required">
<xs:annotation>
<xs:documentation>
Maximum number of allowed calls in a given period
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="period" type="xs:duration" use="required">
<xs:annotation>
<xs:documentation>
                    The length of the quota period
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="max-available-calls" type="xs:positiveInteger" use="optional">
<xs:annotation>
<xs:documentation>
Maximum number of available calls that can be accumulated
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="initial-available-calls" type="xs:unsignedInt" use="optional">
<xs:annotation>
<xs:documentation>
                    Number of calls available to a user at their first call
</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:complexType>
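A quota element combining these attributes might look like the following sketch (values are arbitrary assumptions): 100 calls per minute, accumulating at most 500 unused calls.

```xml
<!-- illustrative values only -->
<quota calls="100" period="PT1M" max-available-calls="500" initial-available-calls="100"/>
```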
<xs:complexType name="anonymousUserType">
<xs:annotation>
<xs:documentation>
Placeholder for a client that is not authenticated
</xs:documentation>
</xs:annotation>
<xs:sequence>
<xs:element name="quota" type="rbcs:quotaType" minOccurs="0" maxOccurs="1"/>
<xs:element name="quota" type="rbcs:quotaType" minOccurs="0" maxOccurs="1">
<xs:annotation>
<xs:documentation>
Calls quota for the user
</xs:documentation>
</xs:annotation>
</xs:element>
</xs:sequence>
</xs:complexType>
<xs:complexType name="userType">
<xs:annotation>
<xs:documentation>
An authenticated user
</xs:documentation>
</xs:annotation>
<xs:sequence>
<xs:element name="quota" type="rbcs:quotaType" minOccurs="0" maxOccurs="1"/>
<xs:element name="quota" type="rbcs:quotaType" minOccurs="0" maxOccurs="1">
<xs:annotation>
<xs:documentation>
Calls quota for the user
</xs:documentation>
</xs:annotation>
</xs:element>
</xs:sequence>
<xs:attribute name="name" type="xs:token" use="required"/>
<xs:attribute name="password" type="xs:string" use="optional"/>
<xs:attribute name="name" type="xs:token" use="required">
<xs:annotation>
<xs:documentation>
User's name
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="password" type="xs:string" use="optional">
<xs:annotation>
<xs:documentation>
User's password used in HTTP basic authentication
</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:complexType>
<xs:complexType name="usersType">
<xs:annotation>
<xs:documentation>
                List of registered users; add an &lt;anonymous&gt; tag to enable anonymous (unauthenticated) access
                even when authentication is enabled
</xs:documentation>
</xs:annotation>
<xs:sequence>
<xs:element name="user" type="rbcs:userType" minOccurs="0" maxOccurs="unbounded"/>
<xs:element name="anonymous" type="rbcs:anonymousUserType" minOccurs="0" maxOccurs="1"/>
@@ -133,12 +444,22 @@
</xs:complexType>
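A users section combining a registered user, a per-user quota and anonymous access could be sketched as follows; the name, password and quota values are assumptions.

```xml
<!-- illustrative sketch: name, password and quota values are assumptions -->
<users>
    <user name="user1" password="password1">
        <quota calls="100" period="PT1M"/>
    </user>
    <anonymous/>
</users>
```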
<xs:complexType name="groupsType">
<xs:annotation>
<xs:documentation>
List of registered user groups
</xs:documentation>
</xs:annotation>
<xs:sequence>
<xs:element name="group" type="rbcs:groupType" maxOccurs="unbounded" minOccurs="0"/>
</xs:sequence>
</xs:complexType>
<xs:complexType name="groupType">
<xs:annotation>
<xs:documentation>
The definition of a user group, with the list of its member users
</xs:documentation>
</xs:annotation>
<xs:sequence>
<xs:element name="users" type="rbcs:userRefsType" maxOccurs="1" minOccurs="0">
<xs:unique name="userRefWriterKey">
@@ -146,11 +467,35 @@
<xs:field xpath="@ref"/>
</xs:unique>
</xs:element>
<xs:element name="roles" type="rbcs:rolesType" maxOccurs="1" minOccurs="0"/>
<xs:element name="user-quota" type="rbcs:quotaType" minOccurs="0" maxOccurs="1"/>
<xs:element name="group-quota" type="rbcs:quotaType" minOccurs="0" maxOccurs="1"/>
<xs:element name="roles" type="rbcs:rolesType" maxOccurs="1" minOccurs="0">
<xs:annotation>
<xs:documentation>
The list of application roles awarded to all the members of this group
</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="user-quota" type="rbcs:quotaType" minOccurs="0" maxOccurs="1">
<xs:annotation>
<xs:documentation>
The call quota for each user in this group
</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="group-quota" type="rbcs:quotaType" minOccurs="0" maxOccurs="1">
<xs:annotation>
<xs:documentation>
The cumulative call quota for all users in this group
</xs:documentation>
</xs:annotation>
</xs:element>
</xs:sequence>
<xs:attribute name="name" type="xs:token"/>
<xs:attribute name="name" type="xs:token">
<xs:annotation>
<xs:documentation>
The group's name
</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:complexType>
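A group definition referencing a registered user and carrying both per-user and cumulative quotas might be sketched as below; the group name, the user reference and the quota values are assumptions, and the roles element is omitted since the allowed role values are not shown in this hunk.

```xml
<!-- illustrative sketch: names and values are assumptions -->
<groups>
    <group name="readers">
        <users>
            <user ref="user1"/>
        </users>
        <user-quota calls="60" period="PT1M"/>
        <group-quota calls="600" period="PT1M"/>
    </group>
</groups>
```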
<xs:simpleType name="role" final="restriction" >
@@ -170,6 +515,11 @@
</xs:complexType>
<xs:complexType name="userRefsType">
<xs:annotation>
<xs:documentation>
A list of references to users in the &lt;users&gt; section
</xs:documentation>
</xs:annotation>
<xs:sequence>
<xs:element name="user" type="rbcs:userRefType" maxOccurs="unbounded" minOccurs="0"/>
<xs:element name="anonymous" minOccurs="0" maxOccurs="1"/>
@@ -177,28 +527,106 @@
</xs:complexType>
<xs:complexType name="userRefType">
<xs:attribute name="ref" type="xs:string" use="required"/>
<xs:annotation>
<xs:documentation>
A reference to a user in the &lt;users&gt; section
</xs:documentation>
</xs:annotation>
<xs:attribute name="ref" type="xs:string" use="required">
<xs:annotation>
<xs:documentation>
Name of the referenced user
</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:complexType>
<xs:complexType name="tlsType">
<xs:annotation>
<xs:documentation>
Enable TLS protocol
</xs:documentation>
</xs:annotation>
<xs:all>
<xs:element name="keystore" type="rbcs:keyStoreType" />
<xs:element name="truststore" type="rbcs:trustStoreType" minOccurs="0"/>
<xs:element name="keystore" type="rbcs:keyStoreType" >
<xs:annotation>
<xs:documentation>
                    The keystore that contains the server's private key and certificate
</xs:documentation>
</xs:annotation>
</xs:element>
<xs:element name="truststore" type="rbcs:trustStoreType" minOccurs="0">
<xs:annotation>
<xs:documentation>
                    The truststore that contains the CAs trusted
                    for TLS client certificate verification
</xs:documentation>
</xs:annotation>
</xs:element>
</xs:all>
</xs:complexType>
<xs:complexType name="keyStoreType">
<xs:attribute name="file" type="xs:string" use="required"/>
<xs:attribute name="password" type="xs:string"/>
<xs:attribute name="key-alias" type="xs:string" use="required"/>
<xs:attribute name="key-password" type="xs:string"/>
<xs:attribute name="file" type="xs:string" use="required">
<xs:annotation>
<xs:documentation>
System path to the keystore file
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="password" type="xs:string">
<xs:annotation>
<xs:documentation>
                Password to open the keystore file
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="key-alias" type="xs:string" use="required">
<xs:annotation>
<xs:documentation>
Alias of the keystore entry containing the private key
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="key-password" type="xs:string">
<xs:annotation>
<xs:documentation>
Private key entry's encryption password
</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:complexType>
<xs:complexType name="trustStoreType">
<xs:attribute name="file" type="xs:string" use="required"/>
<xs:attribute name="password" type="xs:string"/>
<xs:attribute name="check-certificate-status" type="xs:boolean"/>
<xs:attribute name="require-client-certificate" type="xs:boolean" use="optional" default="false"/>
<xs:attribute name="file" type="xs:string" use="required">
<xs:annotation>
<xs:documentation>
                Path to the truststore file
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="password" type="xs:string">
<xs:annotation>
<xs:documentation>
                Truststore file password
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="check-certificate-status" type="xs:boolean">
<xs:annotation>
<xs:documentation>
                Whether or not to check the certificate validity using CRL/OCSP
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="require-client-certificate" type="xs:boolean" use="optional" default="false">
<xs:annotation>
<xs:documentation>
                If true, the server requires a TLS client certificate and refuses the connection
                when a client certificate isn't provided
</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:complexType>
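A TLS block tying keystore and truststore together could be sketched as below, assuming the enclosing element is named `<tls>` as the tlsType above suggests; the file paths, the alias and the passwords are placeholders.

```xml
<!-- illustrative sketch: paths, alias and passwords are placeholders -->
<tls>
    <keystore file="/etc/rbcs/keystore.pfx"
              password="changeit"
              key-alias="server"
              key-password="changeit"/>
    <truststore file="/etc/rbcs/truststore.pfx"
                password="changeit"
                check-certificate-status="false"
                require-client-certificate="true"/>
</tls>
```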
<xs:complexType name="propertiesType">
@@ -220,5 +648,17 @@
<xs:attribute name="port" type="xs:unsignedShort" use="required"/>
</xs:complexType>
<xs:simpleType name="byteSizeType">
<xs:restriction base="xs:token">
<xs:pattern value="(0x[a-f0-9]+|[0-9]+)"/>
</xs:restriction>
</xs:simpleType>
<xs:simpleType name="compressionLevelType">
<xs:restriction base="xs:integer">
<xs:minInclusive value="-1"/>
<xs:maxInclusive value="9"/>
</xs:restriction>
</xs:simpleType>
</xs:schema>

View File

@@ -1,30 +0,0 @@
package net.woggioni.rbcs.server.test.utils;
import net.woggioni.jwo.JWO;
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;
public class NetworkUtils {
private static final int MAX_ATTEMPTS = 50;
public static int getFreePort() {
int count = 0;
while(count < MAX_ATTEMPTS) {
try (ServerSocket serverSocket = new ServerSocket(0, 50, InetAddress.getLocalHost())) {
final var candidate = serverSocket.getLocalPort();
if (candidate > 0) {
return candidate;
} else {
JWO.newThrowable(RuntimeException.class, "Got invalid port number: %d", candidate);
throw new RuntimeException("Error trying to find an open port");
}
} catch (IOException ignored) {
++count;
}
}
throw new RuntimeException("Error trying to find an open port");
}
}

View File

@@ -2,10 +2,10 @@ package net.woggioni.rbcs.server.test
import net.woggioni.rbcs.api.Configuration
import net.woggioni.rbcs.api.Role
import net.woggioni.rbcs.common.RBCS.getFreePort
import net.woggioni.rbcs.common.Xml
import net.woggioni.rbcs.server.cache.FileSystemCacheConfiguration
import net.woggioni.rbcs.server.configuration.Serializer
import net.woggioni.rbcs.server.test.utils.NetworkUtils
import java.net.URI
import java.net.http.HttpRequest
import java.nio.charset.StandardCharsets
@@ -33,13 +33,11 @@ abstract class AbstractBasicAuthServerTest : AbstractServerTest() {
this.cacheDir = testDir.resolve("cache")
cfg = Configuration.of(
"127.0.0.1",
NetworkUtils.getFreePort(),
getFreePort(),
50,
serverPath,
Configuration.EventExecutor(false),
Configuration.Connection(
Duration.of(10, ChronoUnit.SECONDS),
Duration.of(10, ChronoUnit.SECONDS),
Duration.of(60, ChronoUnit.SECONDS),
Duration.of(30, ChronoUnit.SECONDS),
Duration.of(30, ChronoUnit.SECONDS),
@@ -47,11 +45,13 @@ abstract class AbstractBasicAuthServerTest : AbstractServerTest() {
),
users.asSequence().map { it.name to it}.toMap(),
sequenceOf(writersGroup, readersGroup).map { it.name to it}.toMap(),
FileSystemCacheConfiguration(this.cacheDir,
FileSystemCacheConfiguration(
this.cacheDir,
maxAge = Duration.ofSeconds(3600 * 24),
digestAlgorithm = "MD5",
compressionLevel = Deflater.DEFAULT_COMPRESSION,
compressionEnabled = false
compressionEnabled = false,
chunkSize = 0x1000
),
Configuration.BasicAuthentication(),
null,

View File

@@ -43,8 +43,9 @@ abstract class AbstractServerTest {
}
private fun stopServer() {
this.serverHandle?.use {
it.shutdown()
this.serverHandle?.let {
it.sendShutdownSignal()
it.get()
}
}
}

View File

@@ -2,12 +2,12 @@ package net.woggioni.rbcs.server.test
import net.woggioni.rbcs.api.Configuration
import net.woggioni.rbcs.api.Role
import net.woggioni.rbcs.common.RBCS.getFreePort
import net.woggioni.rbcs.common.Xml
import net.woggioni.rbcs.server.cache.FileSystemCacheConfiguration
import net.woggioni.rbcs.server.configuration.Serializer
import net.woggioni.rbcs.server.test.utils.CertificateUtils
import net.woggioni.rbcs.server.test.utils.CertificateUtils.X509Credentials
import net.woggioni.rbcs.server.test.utils.NetworkUtils
import org.bouncycastle.asn1.x500.X500Name
import java.net.URI
import java.net.http.HttpClient
@@ -138,13 +138,11 @@ abstract class AbstractTlsServerTest : AbstractServerTest() {
createKeyStoreAndTrustStore()
cfg = Configuration(
"127.0.0.1",
NetworkUtils.getFreePort(),
getFreePort(),
100,
serverPath,
Configuration.EventExecutor(false),
Configuration.Connection(
Duration.of(10, ChronoUnit.SECONDS),
Duration.of(10, ChronoUnit.SECONDS),
Duration.of(60, ChronoUnit.SECONDS),
Duration.of(30, ChronoUnit.SECONDS),
Duration.of(30, ChronoUnit.SECONDS),
@@ -154,9 +152,10 @@ abstract class AbstractTlsServerTest : AbstractServerTest() {
sequenceOf(writersGroup, readersGroup).map { it.name to it }.toMap(),
FileSystemCacheConfiguration(this.cacheDir,
maxAge = Duration.ofSeconds(3600 * 24),
compressionEnabled = true,
compressionEnabled = false,
compressionLevel = Deflater.DEFAULT_COMPRESSION,
digestAlgorithm = "MD5"
digestAlgorithm = "MD5",
chunkSize = 0x1000
),
// InMemoryCacheConfiguration(
// maxAge = Duration.ofSeconds(3600 * 24),

View File

@@ -86,7 +86,7 @@ class BasicAuthServerTest : AbstractBasicAuthServerTest() {
@Test
@Order(4)
fun putAsAWriterUser() {
val client: HttpClient = HttpClient.newHttpClient()
val client: HttpClient = HttpClient.newBuilder().version(HttpClient.Version.HTTP_1_1).build()
val (key, value) = keyValuePair
val user = cfg.users.values.find {

View File

@@ -2,10 +2,10 @@ package net.woggioni.rbcs.server.test
import io.netty.handler.codec.http.HttpResponseStatus
import net.woggioni.rbcs.api.Configuration
import net.woggioni.rbcs.common.RBCS.getFreePort
import net.woggioni.rbcs.common.Xml
import net.woggioni.rbcs.server.cache.InMemoryCacheConfiguration
import net.woggioni.rbcs.server.configuration.Serializer
import net.woggioni.rbcs.server.test.utils.NetworkUtils
import org.junit.jupiter.api.Assertions
import org.junit.jupiter.api.Order
import org.junit.jupiter.api.Test
@@ -33,13 +33,11 @@ class NoAuthServerTest : AbstractServerTest() {
this.cacheDir = testDir.resolve("cache")
cfg = Configuration(
"127.0.0.1",
NetworkUtils.getFreePort(),
getFreePort(),
100,
serverPath,
Configuration.EventExecutor(false),
Configuration.Connection(
Duration.of(10, ChronoUnit.SECONDS),
Duration.of(10, ChronoUnit.SECONDS),
Duration.of(60, ChronoUnit.SECONDS),
Duration.of(30, ChronoUnit.SECONDS),
Duration.of(30, ChronoUnit.SECONDS),
@@ -52,7 +50,8 @@ class NoAuthServerTest : AbstractServerTest() {
compressionEnabled = true,
digestAlgorithm = "MD5",
compressionLevel = Deflater.DEFAULT_COMPRESSION,
maxSize = 0x1000000
maxSize = 0x1000000,
chunkSize = 0x1000
),
null,
null,
@@ -80,7 +79,7 @@ class NoAuthServerTest : AbstractServerTest() {
@Test
@Order(1)
fun putWithNoAuthorizationHeader() {
val client: HttpClient = HttpClient.newHttpClient()
val client: HttpClient = HttpClient.newBuilder().version(HttpClient.Version.HTTP_1_1).build()
val (key, value) = keyValuePair
val requestBuilder = newRequestBuilder(key)
@@ -119,6 +118,56 @@ class NoAuthServerTest : AbstractServerTest() {
@Test
@Order(4)
fun getUnhandledPath() {
val client: HttpClient = HttpClient.newHttpClient()
val (key, _) = newEntry(random)
val requestBuilder = HttpRequest.newBuilder()
.uri(URI.create("http://${cfg.host}:${cfg.port}/some/other/path/$key"))
val response: HttpResponse<ByteArray> =
client.send(requestBuilder.build(), HttpResponse.BodyHandlers.ofByteArray())
Assertions.assertEquals(HttpResponseStatus.BAD_REQUEST.code(), response.statusCode())
}
@Test
@Order(5)
fun putUnhandledPath() {
val client: HttpClient = HttpClient.newHttpClient()
val (key, value) = newEntry(random)
val requestBuilder = HttpRequest.newBuilder()
.uri(URI.create("http://${cfg.host}:${cfg.port}/some/other/path/$key"))
.PUT(HttpRequest.BodyPublishers.ofByteArray(value))
val response: HttpResponse<ByteArray> =
client.send(requestBuilder.build(), HttpResponse.BodyHandlers.ofByteArray())
Assertions.assertEquals(HttpResponseStatus.BAD_REQUEST.code(), response.statusCode())
}
@Test
@Order(6)
fun getRelativeUnhandledPath() {
val client: HttpClient = HttpClient.newHttpClient()
val (key, _) = newEntry(random)
val requestBuilder = HttpRequest.newBuilder()
.uri(URI.create("http://${cfg.host}:${cfg.port}/some/nested/path/../../../some/other/path/$key"))
val response: HttpResponse<ByteArray> =
client.send(requestBuilder.build(), HttpResponse.BodyHandlers.ofByteArray())
Assertions.assertEquals(HttpResponseStatus.BAD_REQUEST.code(), response.statusCode())
}
@Test
@Order(7)
fun getRelativePath() {
val client: HttpClient = HttpClient.newHttpClient()
val (key, value) = keyValuePair
val requestBuilder = HttpRequest.newBuilder()
.uri(URI.create("http://${cfg.host}:${cfg.port}/some/other/path/../../nested/path/$key"))
val response: HttpResponse<ByteArray> =
client.send(requestBuilder.build(), HttpResponse.BodyHandlers.ofByteArray())
Assertions.assertEquals(HttpResponseStatus.OK.code(), response.statusCode())
Assertions.assertArrayEquals(value, response.body())
}
@Test
@Order(10)
fun traceTest() {
val client: HttpClient = HttpClient.newBuilder().version(HttpClient.Version.HTTP_1_1).build()
val requestBuilder = newRequestBuilder("").method(

View File

@@ -4,14 +4,12 @@
xs:schemaLocation="urn:net.woggioni.rbcs.server jpms://net.woggioni.rbcs.server/net/woggioni/rbcs/server/schema/rbcs.xsd">
<bind host="127.0.0.1" port="11443" incoming-connections-backlog-size="22"/>
<connection
write-timeout="PT25M"
read-timeout="PT20M"
read-idle-timeout="PT10M"
write-idle-timeout="PT11M"
idle-timeout="PT30M"
max-request-size="101325"/>
<event-executor use-virtual-threads="false"/>
<cache xs:type="rbcs:fileSystemCacheType" path="/tmp/rbcs" max-age="P7D"/>
<cache xs:type="rbcs:fileSystemCacheType" path="/tmp/rbcs" max-age="P7D" chunk-size="0xa910"/>
<authentication>
<none/>
</authentication>

View File

@@ -9,11 +9,9 @@
max-request-size="67108864"
idle-timeout="PT30S"
read-idle-timeout="PT60S"
write-idle-timeout="PT60S"
read-timeout="PT5M"
write-timeout="PT5M"/>
write-idle-timeout="PT60S"/>
<event-executor use-virtual-threads="true"/>
<cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" max-size="16777216" compression-mode="deflate">
<cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" chunk-size="123">
<server host="memcached" port="11211"/>
</cache>
<authorization>

View File

@@ -5,14 +5,12 @@
xs:schemaLocation="urn:net.woggioni.rbcs.server.memcache jpms://net.woggioni.rbcs.server.memcache/net/woggioni/rbcs/server/memcache/schema/rbcs-memcache.xsd urn:net.woggioni.rbcs.server jpms://net.woggioni.rbcs.server/net/woggioni/rbcs/server/schema/rbcs.xsd">
<bind host="127.0.0.1" port="11443" incoming-connections-backlog-size="50"/>
<connection
write-timeout="PT25M"
read-timeout="PT20M"
read-idle-timeout="PT10M"
write-idle-timeout="PT11M"
idle-timeout="PT30M"
max-request-size="101325"/>
<event-executor use-virtual-threads="false"/>
<cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" max-size="101325" digest="SHA-256">
<cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" digest="SHA-256" chunk-size="456" compression-mode="deflate" compression-level="7">
<server host="127.0.0.1" port="11211" max-connections="10" connection-timeout="PT20S"/>
</cache>
<authentication>

View File

@@ -4,14 +4,12 @@
xs:schemaLocation="urn:net.woggioni.rbcs.server jpms://net.woggioni.rbcs.server/net/woggioni/rbcs/server/schema/rbcs.xsd">
<bind host="127.0.0.1" port="11443" incoming-connections-backlog-size="180"/>
<connection
write-timeout="PT25M"
read-timeout="PT20M"
read-idle-timeout="PT10M"
write-idle-timeout="PT11M"
idle-timeout="PT30M"
max-request-size="4096"/>
<event-executor use-virtual-threads="false"/>
<cache xs:type="rbcs:inMemoryCacheType" max-age="P7D"/>
<cache xs:type="rbcs:inMemoryCacheType" max-age="P7D" chunk-size="0xa91f"/>
<authorization>
<users>
<user name="user1" password="password1">

3
rbcs-servlet/Dockerfile Normal file
View File

@@ -0,0 +1,3 @@
FROM tomcat:jdk21
COPY ./rbcs-servlet-*.war /usr/local/tomcat/webapps/rbcs-servlet.war

28
rbcs-servlet/README.md Normal file
View File

@@ -0,0 +1,28 @@
## How to run
```bash
gradlew dockerBuildImage
```
then, in this directory, run
```bash
docker run --rm -p 127.0.0.1:8080:8080 -m 1G --name tomcat -v $(pwd)/conf/server.xml:/usr/local/tomcat/conf/server.xml gitea.woggioni.net/woggioni/rbcs/servlet:latest
```
You can then call the servlet cache with this RBCS client profile:
```xml
<profile name="servlet" base-url="http://127.0.0.1:8080/rbcs-servlet/cache/" max-connections="100" enable-compression="false">
<no-auth/>
<connection
idle-timeout="PT5S"
read-idle-timeout="PT10S"
write-idle-timeout="PT10S"
read-timeout="PT5S"
write-timeout="PT5S"/>
<retry-policy max-attempts="10" initial-delay="PT1S" exp="1.2"/>
</profile>
```
## Notes
The servlet implementation has an in-memory cache whose maximum
size is hardcoded to 0x8000000 bytes (around 134 MB).

33
rbcs-servlet/build.gradle Normal file
View File

@@ -0,0 +1,33 @@
plugins {
alias(catalog.plugins.kotlin.jvm)
alias(catalog.plugins.gradle.docker)
id 'war'
}
import com.bmuschko.gradle.docker.tasks.image.DockerBuildImage
dependencies {
compileOnly catalog.jakarta.servlet.api
compileOnly catalog.jakarta.enterprise.cdi.api
implementation catalog.jwo
implementation catalog.jakarta.el
implementation catalog.jakarta.cdi.el.api
implementation catalog.weld.servlet.core
implementation catalog.weld.web
}
Provider<Copy> prepareDockerBuild = tasks.register('prepareDockerBuild', Copy) {
group = 'docker'
into project.layout.buildDirectory.file('docker')
from(tasks.war)
from(file('Dockerfile'))
}
Provider<DockerBuildImage> dockerBuild = tasks.register('dockerBuildImage', DockerBuildImage) {
group = 'docker'
dependsOn(prepareDockerBuild)
images.add('gitea.woggioni.net/woggioni/rbcs/servlet:latest')
images.add("gitea.woggioni.net/woggioni/rbcs/servlet:${version}")
}

View File

@@ -0,0 +1,140 @@
<?xml version="1.0" encoding="UTF-8"?>
<!-- Note: A "Server" is not itself a "Container", so you may not
define subcomponents such as "Valves" at this level.
Documentation at /docs/config/server.html
-->
<Server port="8005" shutdown="SHUTDOWN">
<Listener className="org.apache.catalina.startup.VersionLoggerListener" />
<!-- Security listener. Documentation at /docs/config/listeners.html
<Listener className="org.apache.catalina.security.SecurityListener" />
-->
<!-- OpenSSL support using Tomcat Native -->
<Listener className="org.apache.catalina.core.AprLifecycleListener" />
<!-- OpenSSL support using FFM API from Java 22 -->
<!-- <Listener className="org.apache.catalina.core.OpenSSLLifecycleListener" /> -->
<!-- Prevent memory leaks due to use of particular java/javax APIs-->
<Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
<Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
<Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />
<!-- Global JNDI resources
Documentation at /docs/jndi-resources-howto.html
-->
<GlobalNamingResources>
<!-- Editable user database that can also be used by
UserDatabaseRealm to authenticate users
-->
<Resource name="UserDatabase" auth="Container"
type="org.apache.catalina.UserDatabase"
description="User database that can be updated and saved"
factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
pathname="conf/tomcat-users.xml" />
</GlobalNamingResources>
<!-- A "Service" is a collection of one or more "Connectors" that share
a single "Container" Note: A "Service" is not itself a "Container",
so you may not define subcomponents such as "Valves" at this level.
Documentation at /docs/config/service.html
-->
<Service name="Catalina">
<!--The connectors can use a shared executor, you can define one or more named thread pools-->
<!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/>-->
<Executor name="tomcatThreadPool" namePrefix="virtual-exec-" className="org.apache.catalina.core.StandardVirtualThreadExecutor"/>
<!-- A "Connector" represents an endpoint by which requests are received
and responses are returned. Documentation at :
HTTP Connector: /docs/config/http.html
AJP Connector: /docs/config/ajp.html
Define a non-SSL/TLS HTTP/1.1 Connector on port 8080
-->
<!-- <Connector port="8080" protocol="HTTP/1.1" executor="tomcatThreadPool"-->
<!-- connectionTimeout="20000"-->
<!-- redirectPort="8443" />-->
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
<!-- A "Connector" using the shared thread pool-->
<!--
<Connector executor="tomcatThreadPool"
port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
-->
<!-- Define an SSL/TLS HTTP/1.1 Connector on port 8443 with HTTP/2
This connector uses the NIO implementation. The default
SSLImplementation will depend on the presence of the APR/native
library and the useOpenSSL attribute of the AprLifecycleListener.
Either JSSE or OpenSSL style configuration may be used regardless of
the SSLImplementation selected. JSSE style configuration is used below.
-->
<!--
<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
maxThreads="150" SSLEnabled="true">
<UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
<SSLHostConfig>
<Certificate certificateKeystoreFile="conf/localhost-rsa.jks"
certificateKeystorePassword="changeit" type="RSA" />
</SSLHostConfig>
</Connector>
-->
<!-- Define an AJP 1.3 Connector on port 8009 -->
<!--
<Connector protocol="AJP/1.3"
address="::1"
port="8009"
redirectPort="8443" />
-->
<!-- An Engine represents the entry point (within Catalina) that processes
every request. The Engine implementation for Tomcat stand alone
analyzes the HTTP headers included with the request, and passes them
on to the appropriate Host (virtual host).
Documentation at /docs/config/engine.html -->
<!-- You should set jvmRoute to support load-balancing via AJP ie :
<Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
-->
<Engine name="Catalina" defaultHost="localhost">
<!--For clustering, please take a look at documentation at:
/docs/cluster-howto.html (simple how to)
/docs/config/cluster.html (reference documentation) -->
<!--
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
-->
<!-- Use the LockOutRealm to prevent attempts to guess user passwords
via a brute-force attack -->
<Realm className="org.apache.catalina.realm.LockOutRealm">
<!-- This Realm uses the UserDatabase configured in the global JNDI
resources under the key "UserDatabase". Any edits
that are performed against this UserDatabase are immediately
available for use by the Realm. -->
<Realm className="org.apache.catalina.realm.UserDatabaseRealm"
resourceName="UserDatabase"/>
</Realm>
<Host name="localhost" appBase="webapps"
unpackWARs="true" autoDeploy="true">
<!-- SingleSignOn valve, share authentication between web applications
Documentation at: /docs/config/valve.html -->
<!--
<Valve className="org.apache.catalina.authenticator.SingleSignOn" />
-->
<!-- Access log processes all example.
Documentation at: /docs/config/valve.html
Note: The pattern used is equivalent to using pattern="common" -->
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
prefix="localhost_access_log" suffix=".txt"
pattern="%h %l %u %t &quot;%r&quot; %s %b" />
</Host>
</Engine>
</Service>
</Server>

View File

@@ -0,0 +1,58 @@
<?xml version="1.0" encoding="UTF-8"?>
<!--
Licensed to the Apache Software Foundation (ASF) under one or more
contributor license agreements. See the NOTICE file distributed with
this work for additional information regarding copyright ownership.
The ASF licenses this file to You under the Apache License, Version 2.0
(the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->
<tomcat-users xmlns="http://tomcat.apache.org/xml"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://tomcat.apache.org/xml tomcat-users.xsd"
version="1.0">
<!--
By default, no user is included in the "manager-gui" role required
to operate the "/manager/html" web application. If you wish to use this app,
you must define such a user - the username and password are arbitrary.
Built-in Tomcat manager roles:
- manager-gui - allows access to the HTML GUI and the status pages
- manager-script - allows access to the HTTP API and the status pages
- manager-jmx - allows access to the JMX proxy and the status pages
- manager-status - allows access to the status pages only
The users below are wrapped in a comment and are therefore ignored. If you
wish to configure one or more of these users for use with the manager web
application, do not forget to remove the <!.. ..> that surrounds them. You
will also need to set the passwords to something appropriate.
-->
<!--
<user username="admin" password="<must-be-changed>" roles="manager-gui"/>
<user username="robot" password="<must-be-changed>" roles="manager-script"/>
-->
<user username="luser" password="password" roles="manager-gui,admin-gui"/>
<!--
The sample user and role entries below are intended for use with the
examples web application. They are wrapped in a comment and thus are ignored
when reading this file. If you wish to configure these users for use with the
examples web application, do not forget to remove the <!.. ..> that surrounds
them. You will also need to set the passwords to something appropriate.
-->
<!--
<role rolename="tomcat"/>
<role rolename="role1"/>
<user username="tomcat" password="<must-be-changed>" roles="tomcat"/>
<user username="both" password="<must-be-changed>" roles="tomcat,role1"/>
<user username="role1" password="<must-be-changed>" roles="role1"/>
-->
</tomcat-users>

View File

@@ -0,0 +1,169 @@
package net.woggioni.rbcs.servlet
import jakarta.annotation.PreDestroy
import jakarta.enterprise.context.ApplicationScoped
import jakarta.inject.Inject
import jakarta.servlet.annotation.WebServlet
import jakarta.servlet.http.HttpServlet
import jakarta.servlet.http.HttpServletRequest
import jakarta.servlet.http.HttpServletResponse
import net.woggioni.jwo.HttpClient.HttpStatus
import net.woggioni.jwo.JWO
import java.io.ByteArrayInputStream
import java.io.ByteArrayOutputStream
import java.nio.file.Path
import java.time.Duration
import java.time.Instant
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.PriorityBlockingQueue
import java.util.concurrent.TimeUnit
import java.util.concurrent.atomic.AtomicLong
import java.util.logging.Logger
private class CacheKey(private val value: ByteArray) {
override fun equals(other: Any?) = if (other is CacheKey) {
value.contentEquals(other.value)
} else false
override fun hashCode() = value.contentHashCode()
}
@ApplicationScoped
open class InMemoryServletCache : AutoCloseable {
private val maxAge= Duration.ofDays(7)
private val maxSize = 0x8000000
companion object {
@JvmStatic
private val log = Logger.getLogger(this::class.java.name)
}
private val size = AtomicLong()
private val map = ConcurrentHashMap<CacheKey, ByteArray>()
private class RemovalQueueElement(val key: CacheKey, val value: ByteArray, val expiry: Instant) :
Comparable<RemovalQueueElement> {
override fun compareTo(other: RemovalQueueElement) = expiry.compareTo(other.expiry)
}
private val removalQueue = PriorityBlockingQueue<RemovalQueueElement>()
@Volatile
    private var running = true // must start as true, otherwise the garbage collector thread below exits immediately
private val garbageCollector = Thread.ofVirtual().name("in-memory-cache-gc").start {
while (running) {
val el = removalQueue.poll(1, TimeUnit.SECONDS) ?: continue
val value = el.value
val now = Instant.now()
if (now > el.expiry) {
val removed = map.remove(el.key, value)
if (removed) {
updateSizeAfterRemoval(value)
}
} else {
removalQueue.put(el)
Thread.sleep(minOf(Duration.between(now, el.expiry), Duration.ofSeconds(1)))
}
}
}
private fun removeEldest(): Long {
while (true) {
val el = removalQueue.take()
val value = el.value
val removed = map.remove(el.key, value)
if (removed) {
val newSize = updateSizeAfterRemoval(value)
return newSize
}
}
}
private fun updateSizeAfterRemoval(removed: ByteArray): Long {
return size.updateAndGet { currentSize: Long ->
currentSize - removed.size
}
}
@PreDestroy
override fun close() {
running = false
garbageCollector.join()
}
open fun get(key: ByteArray) = map[CacheKey(key)]
open fun put(
key: ByteArray,
value: ByteArray,
) {
val cacheKey = CacheKey(key)
        val oldSize = map.put(cacheKey, value)?.size ?: 0
val delta = value.size - oldSize
var newSize = size.updateAndGet { currentSize: Long ->
currentSize + delta
}
removalQueue.put(RemovalQueueElement(cacheKey, value, Instant.now().plus(maxAge)))
while (newSize > maxSize) {
newSize = removeEldest()
}
}
}
@WebServlet(urlPatterns = ["/cache/*"])
class CacheServlet : HttpServlet() {
companion object {
@JvmStatic
private val log = Logger.getLogger(this::class.java.name)
}
@Inject
private lateinit var cache : InMemoryServletCache
private fun getKey(req : HttpServletRequest) : String {
return Path.of(req.pathInfo).fileName.toString()
}
override fun doPut(req: HttpServletRequest, resp: HttpServletResponse) {
val baos = ByteArrayOutputStream()
baos.use {
JWO.copy(req.inputStream, baos)
}
val key = getKey(req)
cache.put(key.toByteArray(Charsets.UTF_8), baos.toByteArray())
resp.status = 201
resp.setContentLength(0)
log.fine {
"[${Thread.currentThread().name}] Added value for key $key"
}
}
override fun doGet(req: HttpServletRequest, resp: HttpServletResponse) {
val key = getKey(req)
val value = cache.get(key.toByteArray(Charsets.UTF_8))
if (value == null) {
log.fine {
"[${Thread.currentThread().name}] Cache miss for key $key"
}
resp.status = HttpStatus.NOT_FOUND.code
resp.setContentLength(0)
} else {
log.fine {
"[${Thread.currentThread().name}] Cache hit for key $key"
}
resp.status = HttpStatus.OK.code
resp.setContentLength(value.size)
ByteArrayInputStream(value).use {
JWO.copy(it, resp.outputStream)
}
}
}
}

Some files were not shown because too many files have changed in this diff.