Compare commits

...

19 Commits

Author SHA1 Message Date
69f215e68f tuned GC parameters in Docker images
All checks were successful
CI / build (push) Successful in 16m35s
2025-03-24 14:42:04 +08:00
222b475223 ensured in-memory-cache is allocated to heap memory
All checks were successful
CI / build (push) Successful in 12m29s
2025-03-11 12:29:43 +08:00
ede515e2ca rebuild native image with wider ISA compatibility
All checks were successful
CI / build (push) Successful in 14m13s
2025-03-10 22:28:55 +08:00
974fdb7a91 added k8s benchmark files
All checks were successful
CI / build (push) Successful in 15m39s
2025-03-10 10:09:30 +08:00
a294229ff0 fixed memory leak in MemcachedCacheHandler 2025-03-09 22:16:03 +08:00
9600dd7e4f solved issue with ignored HttpContent and HttpCacheContent messages in the Netty pipeline
All checks were successful
CI / build (push) Successful in 12m48s
2025-03-09 13:57:52 +08:00
729276a2b1 fixed native image configuration
All checks were successful
CI / build (push) Successful in 14m12s
2025-03-08 14:35:50 +08:00
7ba7070693 fixed server support for request pipelining
All checks were successful
CI / build (push) Successful in 15m33s
2025-03-08 11:07:21 +08:00
59a12d6218 added server support for request pipelining
Some checks failed
CI / build (push) Has been cancelled
2025-03-07 11:53:42 +08:00
fc298de548 made chunk size a global shared parameter between the server and the cache backends 2025-03-06 22:08:19 +08:00
8b639fc0b3 added request pipelining support to RemoteBuildCacheClient 2025-03-06 21:58:53 +08:00
5545f618f9 updated documentation 2025-03-04 10:40:05 +08:00
43c0938d9a added test case 2025-03-04 09:35:26 +08:00
17215b401a fixed shutdown issue
All checks were successful
CI / build (push) Successful in 16m6s
2025-03-03 22:02:00 +08:00
4aced1c717 moved to GraalVM CE
All checks were successful
CI / build (push) Successful in 15m34s
2025-03-03 17:53:12 +08:00
31ce34cddb simplified InMemoryCache implementation 2025-03-03 09:44:37 +08:00
d64f7f4f27 added performance benchmarks 2025-02-28 13:49:05 +08:00
d15235fc4c fixed missing plugin in native image
All checks were successful
CI / build (push) Successful in 34m59s
2025-02-27 23:30:33 +08:00
49bb4f41b8 improved documentation 2025-02-26 21:40:58 +08:00
61 changed files with 1247 additions and 551 deletions

150
README.md
View File

@@ -1,4 +1,45 @@
# Remote Build Cache Server
![Release](https://img.shields.io/gitea/v/release/woggioni/rbcs?gitea_url=https%3A%2F%2Fgitea.woggioni.net)
![Version](https://img.shields.io/maven-metadata/v?metadataUrl=https%3A%2F%2Fgitea.woggioni.net%2Fapi%2Fpackages%2Fwoggioni%2Fmaven%2Fnet%2Fwoggioni%2Frbcs-cli%2Fmaven-metadata.xml)
![License](https://img.shields.io/badge/license-MIT-green)
![Language](https://img.shields.io/gitea/languages/count/woggioni/rbcs?gitea_url=https%3A%2F%2Fgitea.woggioni.net)
<!--
![Last commit](https://img.shields.io/gitea/last-commit/woggioni/rbcs?gitea_url=https%3A%2F%2Fgitea.woggioni.net)
-->
Speed up your builds by sharing and reusing unchanged build outputs across your team.
Remote Build Cache Server (RBCS) allows teams to share and reuse unchanged build and test outputs,
significantly reducing build times for both local and CI environments. By eliminating redundant work,
RBCS helps teams become more productive and efficient.
**Key Features:**
- Support for both Gradle and Maven build environments
- Pluggable storage backends (in-memory, disk-backed, memcached)
- Flexible authentication (HTTP basic or TLS certificate)
- Role-based access control
- Request throttling
## Table of Contents
- [Quickstart](#quickstart)
- [Integration with build tools](#integration-with-build-tools)
- [Use RBCS with Gradle](#use-rbcs-with-gradle)
- [Use RBCS with Maven](#use-rbcs-with-maven)
- [Server configuration](#server-configuration)
- [Authentication](#authentication)
- [HTTP Basic authentication](#configure-http-basic-authentication)
- [TLS client certificate authentication](#configure-tls-certificate-authentication)
- [Authentication & Access Control](#access-control)
- [Plugins](#plugins)
- [Client Tools](#rbcs-client)
- [Logging](#logging)
- [Performance](#performance)
- [FAQ](#faq)
Remote Build Cache Server (shortened to RBCS) allows you to share and reuse unchanged build
and test outputs across the team. This speeds up local and CI builds since cycles are not wasted
re-building components that are unaffected by new code changes. RBCS supports both Gradle and
@@ -12,7 +53,7 @@ and throttling.
## Quickstart
### Downloading the jar file
### Use the all-in-one jar file
You can download the latest version from [this link](https://gitea.woggioni.net/woggioni/-/packages/maven/net.woggioni:rbcs-cli/)
@@ -25,7 +66,7 @@ java -jar rbcs-cli.jar server
By default it starts an HTTP server bound to localhost and listening on port 8080, with no authentication,
writing data to disk, which you can use for testing
### Using the Docker image
### Use the Docker image
You can pull the latest Docker image with
```bash
docker pull gitea.woggioni.net/woggioni/rbcs:latest
@@ -34,41 +75,20 @@ docker pull gitea.woggioni.net/woggioni/rbcs:latest
By default it starts an HTTP server bound to localhost and listening on port 8080, with no authentication,
writing data to disk, which you can use for testing
### Using the native executable
### Use the native executable
If you are on a Linux X86_64 machine you can download the native executable
from [here](https://gitea.woggioni.net/woggioni/-/packages/maven/net.woggioni:rbcs-cli/).
It behaves the same as the jar file but it doesn't require a JVM and it has faster startup times.
becausue of GraalVm's [closed-world assumption](https://www.graalvm.org/latest/reference-manual/native-image/basics/#static-analysis),
because of GraalVM's [closed-world assumption](https://www.graalvm.org/latest/reference-manual/native-image/basics/#static-analysis),
the native executable does not support plugins, so it comes with all plugins embedded into it.
## Usage
> [!WARNING]
> The native executable is built with `-march=skylake`, so it may fail with SIGILL on x86 CPUs that do not support
> the full skylake instruction set (as a rule of thumb, older than 2015)
### Configuration
The location of the `rbcs-server.xml` configuration file depends on the operating system.
Alternatively, it can be changed by setting the `RBCS_CONFIGURATION_DIR` environment variable or the `net.woggioni.rbcs.conf.dir` Java system property
to the directory that contains the `rbcs-server.xml` file.
## Integration with build tools
The server configuration file follows the XML format and uses XML schema for validation
(you can find the schema for the main configuration file [here](https://gitea.woggioni.net/woggioni/rbcs/src/branch/master/rbcs-server/src/main/resources/net/woggioni/rbcs/server/schema/rbcs-server.xsd)).
The configuration values are enclosed inside XML attributes and support system property / environment variable interpolation.
As an example, you can configure RBCS to read the server port number from the `RBCS_SERVER_PORT` environment variable
and the bind address from the `rpc.bind.address` JVM system property with
```xml
<bind host="${sys:rpc.bind.address}" port="${env:RBCS_SERVER_PORT}"/>
```
Full documentation for all tags and attributes is available [here](doc/server_configuration.md).
### Plugins
If you want to use memcache as a storage backend you'll also need to download [the memcache plugin](https://gitea.woggioni.net/woggioni/-/packages/maven/net.woggioni:rbcs-server-memcache/).
Plugins need to be stored in a folder named `plugins` located in the server's working directory
(the directory where the server process is started). They are shipped as TAR archives, so you need to extract
the content of the archive into the `plugins` directory for the server to pick them up.
### Using RBCS with Gradle
### Use RBCS with Gradle
Add this to the `settings.gradle` file of your project
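As an illustrative sketch only (the URL, credentials and `push` flag below are placeholders, not RBCS-specific values; adapt them to your deployment), a remote HTTP build cache entry in `settings.gradle` typically looks like this:

```groovy
// Illustrative only: point Gradle's remote build cache at an RBCS instance.
// URL and credentials are placeholders and must match your server setup.
buildCache {
    remote(HttpBuildCache) {
        url = 'https://rbcs.example.com/'
        push = true
        credentials {
            username = 'build-user'
            password = System.getenv('RBCS_PASSWORD')
        }
    }
}
```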
@@ -113,7 +133,7 @@ add `org.gradle.caching=true` to your `<project>/gradle.properties` or run gradl
Read [Gradle documentation](https://docs.gradle.org/current/userguide/build_cache.html) for more detailed information.
### Using RBCS with Maven
### Use RBCS with Maven
1. Create an `extensions.xml` in `<project>/.mvn/extensions.xml` with the following content
```xml
@@ -143,6 +163,46 @@ Alternatively you can set those properties in your `<project>/pom.xml`
Read [here](https://maven.apache.org/extensions/maven-build-cache-extension/remote-cache.html)
for more information.
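As a rough sketch only (the extension version below is a placeholder), registering Apache's `maven-build-cache-extension` in `.mvn/extensions.xml` typically looks like this; the remote cache URL and credentials are then supplied through the properties mentioned above:

```xml
<!-- Illustrative only: registers the Maven build cache extension; the version is a placeholder -->
<extensions>
    <extension>
        <groupId>org.apache.maven.extensions</groupId>
        <artifactId>maven-build-cache-extension</artifactId>
        <version>1.2.0</version>
    </extension>
</extensions>
```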
## Server configuration
RBCS reads an XML configuration file, by default named `rbcs-server.xml`.
The expected location of the `rbcs-server.xml` file depends on the operating system;
if the configuration file is not found, a default one is created and its location is printed
to the console:
```bash
user@76a90cbcd75d:~$ rbcs-cli server
2025-01-01 00:00:00,000 [INFO ] (main) n.w.r.c.impl.commands.ServerCommand -- Creating default configuration file at '/home/user/.config/rbcs/rbcs-server.xml'
```
Alternatively, it can be changed by setting the `RBCS_CONFIGURATION_DIR` environment variable or the `net.woggioni.rbcs.conf.dir`
Java system property to the directory that contains the `rbcs-server.xml` file.
It can also be directly specified from the command line with
```bash
java -jar rbcs-cli.jar server -c /path/to/rbcs-server.xml
```
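For example (an illustrative sketch; `/etc/rbcs` is a placeholder directory):

```bash
# Point RBCS at a directory containing rbcs-server.xml via the environment variable...
RBCS_CONFIGURATION_DIR=/etc/rbcs java -jar rbcs-cli.jar server
# ...or via the equivalent Java system property
java -Dnet.woggioni.rbcs.conf.dir=/etc/rbcs -jar rbcs-cli.jar server
```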
The server configuration file follows the XML format and uses XML schema for validation
(you can find the schema for the `rbcs-server.xml` configuration file [here](https://gitea.woggioni.net/woggioni/rbcs/src/branch/master/rbcs-server/src/main/resources/net/woggioni/rbcs/server/schema/rbcs-server.xsd)).
The configuration values are enclosed inside XML attributes and support system property / environment variable interpolation.
As an example, you can configure RBCS to read the server port number from the `RBCS_SERVER_PORT` environment variable
and the bind address from the `rpc.bind.address` JVM system property with
```xml
<bind host="${sys:rpc.bind.address}" port="${env:RBCS_SERVER_PORT}"/>
```
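For instance, a launch matching the snippet above might look like this (the port and bind address are arbitrary example values):

```bash
# Supply the interpolated values through the environment and a JVM system property
RBCS_SERVER_PORT=8080 java -Drpc.bind.address=0.0.0.0 -jar rbcs-cli.jar server
```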
Full documentation for all tags and attributes and configuration file examples
are available [here](doc/server_configuration.md).
### Plugins
If you want to use memcache as a storage backend you'll also need to download [the memcache plugin](https://gitea.woggioni.net/woggioni/-/packages/maven/net.woggioni:rbcs-server-memcache/).
Plugins need to be stored in a folder named `plugins` located in the server's working directory
(the directory where the server process is started). They are shipped as TAR archives, so you need to extract
the content of the archive into the `plugins` directory for the server to pick them up.
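For example, to install the memcache plugin (illustrative; run from the server's working directory, the archive name depends on the version you downloaded):

```bash
mkdir -p plugins
tar -xf rbcs-server-memcache-*.tar -C plugins
```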
## Authentication
RBCS supports 2 authentication mechanisms:
@@ -250,7 +310,11 @@ as a health check (mind you need to have `Healthcheck` role in order to perform
RBCS ships with a command line client that can be used for testing, benchmarking, or manually
uploading/downloading files to the cache. It must be configured with the `rbcs-client.xml`,
whose location follows the same logic of the `rbcs-server.xml`
whose location follows the same logic as the `rbcs-server.xml`.
The `rbcs-client.xml` must adhere to the [rbcs-client.xsd](rbcs-client/src/main/resources/net/woggioni/rbcs/client/schema/rbcs-client.xsd)
XML schema.
The documentation for the `rbcs-client.xml` configuration file is available [here](conf/client_configuration.md)
### GET command
@@ -263,6 +327,24 @@ java -jar rbcs-cli.jar client -p $CLIENT_PROFILE_NAME get -k $CACHE_KEY -v $FILE
```bash
java -jar rbcs-cli.jar client -p $CLIENT_PROFILE_NAME put -k $CACHE_KEY -v $FILE_TO_BE_UPLOADED
```
If you don't specify a key, a UUID key based on the file content is used.
If you add the `-i` command line parameter, the uploaded file is served with the
`Content-Disposition: inline` HTTP header, so that browsers attempt to render
it in the page instead of triggering a file download (this way you can create a temporary web page).
The client tries to detect the file's MIME type upon upload, but if you want to be sure you can specify
it manually with the `-t` parameter.
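For instance, combining these flags (the file name and MIME type below are placeholders):

```bash
# Upload an HTML file with an explicit MIME type and serve it inline in the browser
java -jar rbcs-cli.jar client -p $CLIENT_PROFILE_NAME put -i -t text/html -v page.html
```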
### Benchmark command
```bash
java -jar rbcs-cli.jar client -p $CLIENT_PROFILE_NAME benchmark -s 4096 -e 10000
```
This will insert 10000 randomly generated entries of 4096 bytes into RBCS, then retrieve them
and check that each retrieved value matches what was inserted.
It will also print throughput statistics along the way.
## Logging
RBCS uses [logback](https://logback.qos.ch/) and ships with a [default logging configuration](./conf/logback.xml) that
@@ -270,6 +352,10 @@ can be overridden with `-Dlogback.configurationFile=path/to/custom/configuration
[Logback documentation](https://logback.qos.ch/manual/configuration.html) for more details about
how to configure Logback
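For example (illustrative; the configuration path is a placeholder):

```bash
java -Dlogback.configurationFile=/etc/rbcs/logback.xml -jar rbcs-cli.jar server
```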
## Performance
You can check performance benchmarks [here](doc/benchmarks.md)
## FAQ
### Why should I use a build cache?

View File

@@ -0,0 +1,93 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: rbcs-server
data:
rbcs-server.xml: |
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<rbcs:server xmlns:xs="http://www.w3.org/2001/XMLSchema-instance"
xmlns:rbcs="urn:net.woggioni.rbcs.server"
xmlns:rbcs-memcache="urn:net.woggioni.rbcs.server.memcache"
xs:schemaLocation="urn:net.woggioni.rbcs.server.memcache jpms://net.woggioni.rbcs.server.memcache/net/woggioni/rbcs/server/memcache/schema/rbcs-memcache.xsd urn:net.woggioni.rbcs.server jpms://net.woggioni.rbcs.server/net/woggioni/rbcs/server/schema/rbcs-server.xsd"
>
<bind host="0.0.0.0" port="8080" incoming-connections-backlog-size="128"/>
<connection
max-request-size="0xd000000"
idle-timeout="PT15S"
read-idle-timeout="PT30S"
write-idle-timeout="PT30S"/>
<event-executor use-virtual-threads="true"/>
<cache xs:type="rbcs:fileSystemCacheType" max-age="P7D" enable-compression="false" path="/rbcs/cache"/>
</rbcs:server>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: rbcs-pvc
namespace: default
spec:
accessModes:
- ReadWriteOnce
storageClassName: local-path
resources:
requests:
storage: 16Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: rbcs-deployment
labels:
app: rbcs
spec:
replicas: 1
selector:
matchLabels:
app: rbcs
template:
metadata:
labels:
app: rbcs
spec:
containers:
- name: rbcs
image: gitea.woggioni.net/woggioni/rbcs:native
imagePullPolicy: Always
args: ['server', '-c', 'rbcs-server.xml']
ports:
- containerPort: 8080
volumeMounts:
- name: config-volume
mountPath: /rbcs/rbcs-server.xml
subPath: rbcs-server.xml
- name: cache-volume
mountPath: /rbcs/cache
resources:
requests:
memory: "0.25Gi"
cpu: "1"
limits:
memory: "0.25Gi"
cpu: "3.5"
volumes:
- name: config-volume
configMap:
name: rbcs-server
- name: cache-volume
persistentVolumeClaim:
claimName: rbcs-pvc
---
apiVersion: v1
kind: Service
metadata:
name: rbcs-service
spec:
type: LoadBalancer
ports:
- port: 8080
targetPort: 8080
protocol: TCP
selector:
app: rbcs

View File

@@ -0,0 +1,76 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: rbcs-server
data:
rbcs-server.xml: |
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<rbcs:server xmlns:xs="http://www.w3.org/2001/XMLSchema-instance"
xmlns:rbcs="urn:net.woggioni.rbcs.server"
xmlns:rbcs-memcache="urn:net.woggioni.rbcs.server.memcache"
xs:schemaLocation="urn:net.woggioni.rbcs.server.memcache jpms://net.woggioni.rbcs.server.memcache/net/woggioni/rbcs/server/memcache/schema/rbcs-memcache.xsd urn:net.woggioni.rbcs.server jpms://net.woggioni.rbcs.server/net/woggioni/rbcs/server/schema/rbcs-server.xsd"
>
<bind host="0.0.0.0" port="8080" incoming-connections-backlog-size="128"/>
<connection
max-request-size="0xd000000"
idle-timeout="PT15S"
read-idle-timeout="PT30S"
write-idle-timeout="PT30S"/>
<event-executor use-virtual-threads="true"/>
<cache xs:type="rbcs:inMemoryCacheType" max-age="P7D" enable-compression="false" max-size="0xb0000000" />
</rbcs:server>
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: rbcs-deployment
labels:
app: rbcs
spec:
replicas: 1
selector:
matchLabels:
app: rbcs
template:
metadata:
labels:
app: rbcs
spec:
containers:
- name: rbcs
image: gitea.woggioni.net/woggioni/rbcs:native
imagePullPolicy: Always
args: ['server', '-c', 'rbcs-server.xml']
ports:
- containerPort: 8080
volumeMounts:
- name: config-volume
mountPath: /rbcs/rbcs-server.xml
subPath: rbcs-server.xml
resources:
requests:
memory: "0.5Gi"
cpu: "1"
limits:
memory: "4Gi"
cpu: "3.5"
volumes:
- name: config-volume
configMap:
name: rbcs-server
---
apiVersion: v1
kind: Service
metadata:
name: rbcs-service
spec:
type: LoadBalancer
ports:
- port: 8080
targetPort: 8080
protocol: TCP
selector:
app: rbcs

117
benchmark/rbcs-memcache.yml Normal file
View File

@@ -0,0 +1,117 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: rbcs-server
data:
rbcs-server.xml: |
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<rbcs:server xmlns:xs="http://www.w3.org/2001/XMLSchema-instance"
xmlns:rbcs="urn:net.woggioni.rbcs.server"
xmlns:rbcs-memcache="urn:net.woggioni.rbcs.server.memcache"
xs:schemaLocation="urn:net.woggioni.rbcs.server.memcache jpms://net.woggioni.rbcs.server.memcache/net/woggioni/rbcs/server/memcache/schema/rbcs-memcache.xsd urn:net.woggioni.rbcs.server jpms://net.woggioni.rbcs.server/net/woggioni/rbcs/server/schema/rbcs-server.xsd"
>
<bind host="0.0.0.0" port="8080" incoming-connections-backlog-size="128"/>
<connection
max-request-size="0xd000000"
idle-timeout="PT15S"
read-idle-timeout="PT30S"
write-idle-timeout="PT30S"/>
<event-executor use-virtual-threads="true"/>
<!--cache xs:type="rbcs:inMemoryCacheType" max-age="P7D" enable-compression="false" max-size="0x10000000" /-->
<cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" chunk-size="0x1000" digest="MD5">
<server host="memcached-service" port="11211" max-connections="256"/>
</cache>
</rbcs:server>
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: rbcs-deployment
labels:
app: rbcs
spec:
replicas: 1
selector:
matchLabels:
app: rbcs
template:
metadata:
labels:
app: rbcs
spec:
containers:
- name: rbcs
image: gitea.woggioni.net/woggioni/rbcs:native
imagePullPolicy: Always
args: ['server', '-c', 'rbcs-server.xml']
ports:
- containerPort: 8080
volumeMounts:
- name: config-volume
mountPath: /rbcs/rbcs-server.xml
subPath: rbcs-server.xml
resources:
requests:
memory: "0.25Gi"
cpu: "1"
limits:
memory: "0.25Gi"
cpu: "1"
volumes:
- name: config-volume
configMap:
name: rbcs-server
---
apiVersion: v1
kind: Service
metadata:
name: rbcs-service
spec:
type: LoadBalancer
ports:
- port: 8080
targetPort: 8080
protocol: TCP
selector:
app: rbcs
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: memcached-deployment
spec:
replicas: 1
selector:
matchLabels:
app: memcached
template:
metadata:
labels:
app: memcached
spec:
containers:
- name: memcached
image: memcached
args: ["-I", "128m", "-m", "4096"]
resources:
requests:
memory: "1Gi"
cpu: "500m" # 0.5 CPU
limits:
memory: "5Gi"
cpu: "500m" # 0.5 CP
---
apiVersion: v1
kind: Service
metadata:
name: memcached-service
spec:
type: ClusterIP # ClusterIP makes it accessible only within the cluster
ports:
- port: 11211 # Default memcached port
targetPort: 11211
protocol: TCP
selector:
app: memcached

View File

@@ -38,8 +38,7 @@ allprojects { subproject ->
withSourcesJar()
modularity.inferModulePath = true
toolchain {
languageVersion = JavaLanguageVersion.of(23)
vendor = JvmVendorSpec.ORACLE
languageVersion = JavaLanguageVersion.of(21)
}
}

87
doc/benchmarks.md Normal file
View File

@@ -0,0 +1,87 @@
# RBCS performance benchmarks
All tests were executed under the following conditions:
- CPU: Intel Celeron J3455 (4 physical cores)
- memory: 8GB DDR3L 1600 MHz
- disk: SATA3 120GB SSD
- HTTP compression: disabled
- cache compression: disabled
- digest: none
- authentication: disabled
- TLS: disabled
- network RTT: 14ms
- network bandwidth: 112 MiB/s
### In memory cache backend
| Cache backend | CPU | CPU quota | Memory quota (GB) | Request size (bytes) | Client connections | PUT (req/s) | GET (req/s) |
|----------------|---------------------|-----------|-------------------|------------------|--------------------|-------------|-------------|
| in-memory | Intel Celeron J3455 | 1.00 | 4 | 128 | 10 | 3691 | 4037 |
| in-memory | Intel Celeron J3455 | 1.00 | 4 | 128 | 100 | 6881 | 7483 |
| in-memory | Intel Celeron J3455 | 1.00 | 4 | 512 | 10 | 3790 | 4069 |
| in-memory | Intel Celeron J3455 | 1.00 | 4 | 512 | 100 | 6716 | 7408 |
| in-memory | Intel Celeron J3455 | 1.00 | 4 | 4096 | 10 | 3399 | 1974 |
| in-memory | Intel Celeron J3455 | 1.00 | 4 | 4096 | 100 | 5341 | 6402 |
| in-memory | Intel Celeron J3455 | 1.00 | 4 | 65536 | 10 | 1099 | 1116 |
| in-memory | Intel Celeron J3455 | 1.00 | 4 | 65536 | 100 | 1379 | 1703 |
| in-memory | Intel Celeron J3455 | 3.50 | 4 | 128 | 10 | 4443 | 5170 |
| in-memory | Intel Celeron J3455 | 3.50 | 4 | 128 | 100 | 12813 | 13568 |
| in-memory | Intel Celeron J3455 | 3.50 | 4 | 512 | 10 | 4450 | 4383 |
| in-memory | Intel Celeron J3455 | 3.50 | 4 | 512 | 100 | 12212 | 13586 |
| in-memory | Intel Celeron J3455 | 3.50 | 4 | 4096 | 10 | 3441 | 3012 |
| in-memory | Intel Celeron J3455 | 3.50 | 4 | 4096 | 100 | 8982 | 10452 |
| in-memory | Intel Celeron J3455 | 3.50 | 4 | 65536 | 10 | 1391 | 1167 |
| in-memory | Intel Celeron J3455 | 3.50 | 4 | 65536 | 100 | 1303 | 1151 |
### Filesystem cache backend
- compression: disabled
- digest: none
- authentication: disabled
- TLS: disabled
| Cache backend | CPU | CPU quota | Memory quota (GB) | Request size (bytes) | Client connections | PUT (req/s) | GET (req/s) |
|---------------|---------------------|-----------|-------------------|------------------|--------------------|-------------|-------------|
| filesystem | Intel Celeron J3455 | 1.00 | 0.25 | 128 | 10 | 1208 | 2048 |
| filesystem | Intel Celeron J3455 | 1.00 | 0.25 | 128 | 100 | 1304 | 2394 |
| filesystem | Intel Celeron J3455 | 1.00 | 0.25 | 512 | 10 | 1408 | 2157 |
| filesystem | Intel Celeron J3455 | 1.00 | 0.25 | 512 | 100 | 1282 | 1888 |
| filesystem | Intel Celeron J3455 | 1.00 | 0.25 | 4096 | 10 | 1291 | 1256 |
| filesystem | Intel Celeron J3455 | 1.00 | 0.25 | 4096 | 100 | 1170 | 1423 |
| filesystem | Intel Celeron J3455 | 1.00 | 0.25 | 65536 | 10 | 313 | 606 |
| filesystem | Intel Celeron J3455 | 1.00 | 0.25 | 65536 | 100 | 298 | 609 |
| filesystem | Intel Celeron J3455 | 3.50 | 0.25 | 128 | 10 | 2195 | 3477 |
| filesystem | Intel Celeron J3455 | 3.50 | 0.25 | 128 | 100 | 2480 | 6207 |
| filesystem | Intel Celeron J3455 | 3.50 | 0.25 | 512 | 10 | 2164 | 3413 |
| filesystem | Intel Celeron J3455 | 3.50 | 0.25 | 512 | 100 | 2842 | 6218 |
| filesystem | Intel Celeron J3455 | 3.50 | 0.25 | 4096 | 10 | 1302 | 2591 |
| filesystem | Intel Celeron J3455 | 3.50 | 0.25 | 4096 | 100 | 2270 | 3045 |
| filesystem | Intel Celeron J3455 | 3.50 | 0.25 | 65536 | 10 | 375 | 394 |
| filesystem | Intel Celeron J3455 | 3.50 | 0.25 | 65536 | 100 | 364 | 462 |
### Memcache cache backend
- compression: disabled
- digest: MD5
- authentication: disabled
- TLS: disabled
| Cache backend | CPU | CPU quota | Memory quota (GB) | Request size (bytes) | Client connections | PUT (req/s) | GET (req/s) |
|---------------|---------------------|-----------|-------------------|------------------|--------------------|-------------|-------------|
| memcache | Intel Celeron J3455 | 1.00 | 0.25 | 128 | 10 | 2505 | 2578 |
| memcache | Intel Celeron J3455 | 1.00 | 0.25 | 128 | 100 | 3582 | 3935 |
| memcache | Intel Celeron J3455 | 1.00 | 0.25 | 512 | 10 | 2495 | 2784 |
| memcache | Intel Celeron J3455 | 1.00 | 0.25 | 512 | 100 | 3565 | 3883 |
| memcache | Intel Celeron J3455 | 1.00 | 0.25 | 4096 | 10 | 2174 | 2505 |
| memcache | Intel Celeron J3455 | 1.00 | 0.25 | 4096 | 100 | 2937 | 3563 |
| memcache | Intel Celeron J3455 | 1.00 | 0.25 | 65536 | 10 | 648 | 1074 |
| memcache | Intel Celeron J3455 | 1.00 | 0.25 | 65536 | 100 | 724 | 1548 |
| memcache | Intel Celeron J3455 | 3.50 | 0.25 | 128 | 10 | 2362 | 2927 |
| memcache | Intel Celeron J3455 | 3.50 | 0.25 | 128 | 100 | 5491 | 6531 |
| memcache | Intel Celeron J3455 | 3.50 | 0.25 | 512 | 10 | 2125 | 2807 |
| memcache | Intel Celeron J3455 | 3.50 | 0.25 | 512 | 100 | 5173 | 6242 |
| memcache | Intel Celeron J3455 | 3.50 | 0.25 | 4096 | 10 | 1720 | 2397 |
| memcache | Intel Celeron J3455 | 3.50 | 0.25 | 4096 | 100 | 3871 | 5859 |
| memcache | Intel Celeron J3455 | 3.50 | 0.25 | 65536 | 10 | 616 | 1016 |
| memcache | Intel Celeron J3455 | 3.50 | 0.25 | 65536 | 100 | 820 | 1677 |

View File

@@ -24,6 +24,7 @@ Configures connection handling parameters.
- `read-idle-timeout` (optional, default: PT60S): Connection timeout when no reads
- `write-idle-timeout` (optional, default: PT60S): Connection timeout when no writes
- `max-request-size` (optional, default: 0x4000000): Maximum allowed request body size
- `chunk-size` (default: 0x10000): Maximum socket write size
#### `<event-executor>`
Configures event execution settings.
@@ -44,7 +45,6 @@ A simple storage backend that uses a hash map to store data in memory
- `digest` (default: MD5): Key hashing algorithm
- `enable-compression` (default: true): Enable deflate compression
- `compression-level` (default: -1): Compression level (-1 to 9)
- `chunk-size` (default: 0x10000): Maximum socket write size
##### FileSystem Cache
@@ -56,7 +56,6 @@ A storage backend that stores data in a folder on the disk
- `digest` (default: MD5): Key hashing algorithm
- `enable-compression` (default: true): Enable deflate compression
- `compression-level` (default: -1): Compression level
- `chunk-size` (default: 0x10000): Maximum in-memory cache value size
#### `<authorization>`
Configures user and group-based access control.
@@ -134,12 +133,24 @@ Configures TLS encryption.
idle-timeout="PT10S"
read-idle-timeout="PT20S"
write-idle-timeout="PT20S"
read-timeout="PT5S"
write-timeout="PT5S"/>
chunk-size="0x1000"/>
<event-executor use-virtual-threads="true"/>
<cache xs:type="rbcs:inMemoryCacheType" max-age="P7D" enable-compression="false" max-size="0x10000000" />
<!--cache xs:type="rbcs:fileSystemCacheType" max-age="P7D" enable-compression="false" path="${sys:java.io.tmpdir}/rbcs"/-->
<authorization>
<!-- uncomment this to enable the filesystem storage backend, storing cache data in "${sys:java.io.tmpdir}/rbcs"
<cache xs:type="rbcs:fileSystemCacheType" max-age="P7D" enable-compression="false" path="${sys:java.io.tmpdir}/rbcs"/>
-->
<!-- uncomment this to use memcache as the storage backend, also make sure you have
the memcache plugin installed in the `plugins` directory if you are running
the jar version of RBCS
<cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" digest="MD5">
<server host="127.0.0.1" port="11211" max-connections="256"/>
</cache>
-->
<authorization>
<users>
<user name="user1" password="II+qeNLft2pZ/JVNo9F7jpjM/BqEcfsJW27NZ6dPVs8tAwHbxrJppKYsbL7J/SMl">
<quota calls="100" period="PT1S"/>

View File

@@ -5,7 +5,7 @@ WORKDIR /home/luser
FROM base-release AS release-vanilla
ADD rbcs-cli-envelope-*.jar rbcs.jar
ENTRYPOINT ["java", "-XX:+UseSerialGC", "-XX:GCTimeRatio=24", "-jar", "/home/luser/rbcs.jar", "server"]
ENTRYPOINT ["java", "-Dlogback.configurationFile=logback.xml", "-XX:MaxRAMPercentage=70", "-XX:GCTimeRatio=24", "-XX:+UseZGC", "-XX:+ZGenerational", "-jar", "/home/luser/rbcs.jar"]
FROM base-release AS release-memcache
ADD --chown=luser:luser rbcs-cli-envelope-*.jar rbcs.jar
@@ -14,10 +14,10 @@ WORKDIR /home/luser/plugins
RUN --mount=type=bind,source=.,target=/build/distributions tar -xf /build/distributions/rbcs-server-memcache*.tar
WORKDIR /home/luser
ADD logback.xml .
ENTRYPOINT ["java", "-Dlogback.configurationFile=logback.xml", "-XX:+UseSerialGC", "-XX:GCTimeRatio=24", "-jar", "/home/luser/rbcs.jar", "server"]
ENTRYPOINT ["java", "-Dlogback.configurationFile=logback.xml", "-XX:MaxRAMPercentage=70", "-XX:GCTimeRatio=24", "-XX:+UseZGC", "-XX:+ZGenerational", "-jar", "/home/luser/rbcs.jar"]
FROM scratch AS release-native
ADD rbcs-cli.upx /rbcs/rbcs-cli
ENV RBCS_CONFIGURATION_DIR="/rbcs"
WORKDIR /rbcs
ENTRYPOINT ["/rbcs/rbcs-cli"]
ENTRYPOINT ["/rbcs/rbcs-cli", "-XX:MaximumHeapSizePercent=70"]

View File

@@ -2,9 +2,9 @@ org.gradle.configuration-cache=false
org.gradle.parallel=true
org.gradle.caching=true
rbcs.version = 0.2.0
rbcs.version = 0.2.1
lys.version = 2025.02.26
lys.version = 2025.03.08
gitea.maven.url = https://gitea.woggioni.net/api/packages/woggioni/maven
docker.registry.url=gitea.woggioni.net

View File

@@ -5,9 +5,12 @@ plugins {
}
dependencies {
implementation catalog.slf4j.api
implementation project(':rbcs-common')
api catalog.netty.common
api catalog.netty.buffer
api catalog.netty.handler
api catalog.netty.codec.http
}
publishing {

View File

@@ -1,10 +1,15 @@
module net.woggioni.rbcs.api {
requires static lombok;
requires java.xml;
requires io.netty.buffer;
requires io.netty.handler;
requires io.netty.transport;
requires io.netty.common;
requires net.woggioni.rbcs.common;
requires io.netty.transport;
requires io.netty.codec.http;
requires io.netty.buffer;
requires org.slf4j;
requires java.xml;
exports net.woggioni.rbcs.api;
exports net.woggioni.rbcs.api.exception;
exports net.woggioni.rbcs.api.message;

View File

@@ -0,0 +1,57 @@
package net.woggioni.rbcs.api;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.codec.http.LastHttpContent;
import io.netty.util.ReferenceCounted;
import lombok.extern.slf4j.Slf4j;
import net.woggioni.rbcs.api.message.CacheMessage;
@Slf4j
public abstract class CacheHandler extends ChannelInboundHandlerAdapter {

    private boolean requestFinished = false;

    abstract protected void channelRead0(ChannelHandlerContext ctx, CacheMessage msg);

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if(!requestFinished && msg instanceof CacheMessage) {
            if(msg instanceof CacheMessage.LastCacheContent) requestFinished = true;
            try {
                channelRead0(ctx, (CacheMessage) msg);
            } finally {
                if(msg instanceof ReferenceCounted rc) rc.release();
            }
        } else {
            ctx.fireChannelRead(msg);
        }
    }

    protected void sendMessageAndFlush(ChannelHandlerContext ctx, Object msg) {
        sendMessage(ctx, msg, true);
    }

    protected void sendMessage(ChannelHandlerContext ctx, Object msg) {
        sendMessage(ctx, msg, false);
    }

    private void sendMessage(ChannelHandlerContext ctx, Object msg, boolean flush) {
        ctx.write(msg);
        if(
            msg instanceof CacheMessage.LastCacheContent ||
            msg instanceof CacheMessage.CachePutResponse ||
            msg instanceof CacheMessage.CacheValueNotFoundResponse ||
            msg instanceof LastHttpContent
        ) {
            ctx.flush();
            ctx.pipeline().remove(this);
        } else if(flush) {
            ctx.flush();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        super.exceptionCaught(ctx, cause);
    }
}

View File

@@ -1,13 +1,13 @@
package net.woggioni.rbcs.api;
import io.netty.channel.ChannelFactory;
import io.netty.channel.ChannelHandler;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.socket.DatagramChannel;
import io.netty.channel.socket.SocketChannel;
public interface CacheHandlerFactory extends AsyncCloseable {
ChannelHandler newHandler(
CacheHandler newHandler(
Configuration configuration,
EventLoopGroup eventLoopGroup,
ChannelFactory<SocketChannel> socketChannelFactory,
ChannelFactory<DatagramChannel> datagramChannelFactory

View File

@@ -1,10 +1,9 @@
package net.woggioni.rbcs.api;
import java.io.Serializable;
import lombok.Getter;
import lombok.RequiredArgsConstructor;
import java.io.Serializable;
@Getter
@RequiredArgsConstructor
public class CacheValueMetadata implements Serializable {

View File

@@ -1,16 +1,15 @@
package net.woggioni.rbcs.api;
import lombok.EqualsAndHashCode;
import lombok.NonNull;
import lombok.Value;
import java.nio.file.Path;
import java.security.cert.X509Certificate;
import java.time.Duration;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;
import lombok.EqualsAndHashCode;
import lombok.NonNull;
import lombok.Value;
@Value
public class Configuration {
@@ -39,6 +38,7 @@ public class Configuration {
Duration readIdleTimeout;
Duration writeIdleTimeout;
int maxRequestSize;
int chunkSize;
}
@Value

View File

@@ -12,6 +12,7 @@ plugins {
import net.woggioni.gradle.envelope.EnvelopePlugin
import net.woggioni.gradle.envelope.EnvelopeJarTask
import net.woggioni.gradle.graalvm.NativeImageConfigurationTask
import net.woggioni.gradle.graalvm.NativeImageTask
import net.woggioni.gradle.graalvm.NativeImagePlugin
import net.woggioni.gradle.graalvm.UpxTask
import net.woggioni.gradle.graalvm.JlinkPlugin
@@ -90,11 +91,10 @@ Provider<EnvelopeJarTask> envelopeJarTaskProvider = tasks.named(EnvelopePlugin.E
}
tasks.named(NativeImagePlugin.CONFIGURE_NATIVE_IMAGE_TASK_NAME, NativeImageConfigurationTask) {
javaLauncher = javaToolchains.launcherFor {
toolchain {
languageVersion = JavaLanguageVersion.of(21)
vendor = JvmVendorSpec.ORACLE
vendor = JvmVendorSpec.GRAAL_VM
}
mainClass = "net.woggioni.rbcs.cli.graal.GraalNativeImageConfiguration"
classpath = project.files(
configurations.configureNativeImageRuntimeClasspath,
@@ -105,9 +105,14 @@ tasks.named(NativeImagePlugin.CONFIGURE_NATIVE_IMAGE_TASK_NAME, NativeImageConfi
systemProperty('io.netty.leakDetectionLevel', 'DISABLED')
modularity.inferModulePath = false
enabled = true
systemProperty('gradle.tmp.dir', temporaryDir.toString())
}
nativeImage {
toolchain {
languageVersion = JavaLanguageVersion.of(23)
vendor = JvmVendorSpec.GRAAL_VM
}
mainClass = mainClassName
// mainModule = mainModuleName
useMusl = true

View File

@@ -0,0 +1,53 @@
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<rbcs:server xmlns:xs="http://www.w3.org/2001/XMLSchema-instance"
xmlns:rbcs="urn:net.woggioni.rbcs.server"
xmlns:rbcs-memcache="urn:net.woggioni.rbcs.server.memcache"
xs:schemaLocation="urn:net.woggioni.rbcs.server.memcache jpms://net.woggioni.rbcs.server.memcache/net/woggioni/rbcs/server/memcache/schema/rbcs-memcache.xsd urn:net.woggioni.rbcs.server jpms://net.woggioni.rbcs.server/net/woggioni/rbcs/server/schema/rbcs-server.xsd"
>
<bind host="127.0.0.1" port="8080" incoming-connections-backlog-size="1024"/>
<connection
max-request-size="67108864"
idle-timeout="PT10S"
read-idle-timeout="PT20S"
write-idle-timeout="PT20S"/>
<event-executor use-virtual-threads="true"/>
<cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" chunk-size="0x1000" digest="MD5">
<server host="127.0.0.1" port="11211" max-connections="256"/>
</cache>
<!--cache xs:type="rbcs:inMemoryCacheType" max-age="P7D" enable-compression="false" max-size="0x10000000" /-->
<!--cache xs:type="rbcs:fileSystemCacheType" max-age="P7D" enable-compression="false" /-->
<authorization>
<users>
<user name="woggioni" password="II+qeNLft2pZ/JVNo9F7jpjM/BqEcfsJW27NZ6dPVs8tAwHbxrJppKYsbL7J/SMl">
<quota calls="100" period="PT1S"/>
</user>
<user name="gitea" password="v6T9+q6/VNpvLknji3ixPiyz2YZCQMXj2FN7hvzbfc2Ig+IzAHO0iiBCH9oWuBDq"/>
<anonymous>
<quota calls="10" period="PT60S" initial-available-calls="10" max-available-calls="10"/>
</anonymous>
</users>
<groups>
<group name="readers">
<users>
<anonymous/>
</users>
<roles>
<reader/>
</roles>
</group>
<group name="writers">
<users>
<user ref="woggioni"/>
<user ref="gitea"/>
</users>
<roles>
<reader/>
<writer/>
</roles>
</group>
</groups>
</authorization>
<authentication>
<none/>
</authentication>
</rbcs:server>

View File

@@ -1,2 +1,2 @@
Args=-O3 --gc=serial --install-exit-handlers --initialize-at-run-time=io.netty --enable-url-protocols=jpms --initialize-at-build-time=net.woggioni.rbcs.common.RbcsUrlStreamHandlerFactory,net.woggioni.rbcs.common.RbcsUrlStreamHandlerFactory$JpmsHandler
Args=-O3 -march=x86-64-v2 --gc=serial --install-exit-handlers --initialize-at-run-time=io.netty --enable-url-protocols=jpms --initialize-at-build-time=net.woggioni.rbcs.common.RbcsUrlStreamHandlerFactory,net.woggioni.rbcs.common.RbcsUrlStreamHandlerFactory$JpmsHandler
#-H:TraceClassInitialization=io.netty.handler.ssl.BouncyCastleAlpnSslUtils

View File

@@ -487,6 +487,10 @@
"name":"jdk.internal.misc.Unsafe",
"methods":[{"name":"getUnsafe","parameterTypes":[] }]
},
{
"name":"net.woggioni.rbcs.api.CacheHandler",
"methods":[{"name":"channelRead","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }]
},
{
"name":"net.woggioni.rbcs.cli.RemoteBuildCacheServerCli",
"allDeclaredFields":true,
@@ -552,11 +556,7 @@
},
{
"name":"net.woggioni.rbcs.client.RemoteBuildCacheClient$sendRequest$1$operationComplete$responseHandler$1",
"methods":[{"name":"channelInactive","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }]
},
{
"name":"net.woggioni.rbcs.client.RemoteBuildCacheClient$sendRequest$1$operationComplete$timeoutHandler$1",
"methods":[{"name":"userEventTriggered","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }]
"methods":[{"name":"channelInactive","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }, {"name":"userEventTriggered","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }]
},
{
"name":"net.woggioni.rbcs.server.RemoteBuildCacheServer$HttpChunkContentCompressor",
@@ -588,17 +588,13 @@
"name":"net.woggioni.rbcs.server.exception.ExceptionHandler",
"methods":[{"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }]
},
{
"name":"net.woggioni.rbcs.server.handler.CacheContentHandler",
"methods":[{"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }]
},
{
"name":"net.woggioni.rbcs.server.handler.MaxRequestSizeHandler",
"methods":[{"name":"channelRead","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }]
},
{
"name":"net.woggioni.rbcs.server.handler.ServerHandler",
"methods":[{"name":"channelRead","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }, {"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }, {"name":"write","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object","io.netty.channel.ChannelPromise"] }]
"methods":[{"name":"channelRead","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }, {"name":"channelReadComplete","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }, {"name":"write","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object","io.netty.channel.ChannelPromise"] }]
},
{
"name":"net.woggioni.rbcs.server.handler.TraceHandler",

View File

@@ -36,6 +36,8 @@
"pattern":"\\Qnet/woggioni/rbcs/server/rbcs-default.xml\\E"
}, {
"pattern":"\\Qnet/woggioni/rbcs/server/schema/rbcs-server.xsd\\E"
}, {
"pattern":"\\Q/net/woggioni/rbcs/server/memcache/schema/rbcs-memcache.xsd\\E"
}, {
"pattern":"java.base:\\Qsun/text/resources/LineBreakIteratorData\\E"
}]},

View File

@@ -32,8 +32,9 @@ object GraalNativeImageConfiguration {
@JvmStatic
fun main(vararg args : String) {
val serverDoc = RemoteBuildCacheServer.DEFAULT_CONFIGURATION_URL.openStream().use {
Xml.parseXml(RemoteBuildCacheServer.DEFAULT_CONFIGURATION_URL, it)
val serverURL = URI.create("file:conf/rbcs-server.xml").toURL()
val serverDoc = serverURL.openStream().use {
Xml.parseXml(serverURL, it)
}
Parser.parse(serverDoc)
@@ -70,7 +71,6 @@ object GraalNativeImageConfiguration {
compressionLevel = Deflater.DEFAULT_COMPRESSION,
compressionEnabled = false,
maxSize = 0x1000000,
chunkSize = 0x1000
),
FileSystemCacheConfiguration(
Path.of(System.getProperty("java.io.tmpdir")).resolve("rbcs"),
@@ -78,7 +78,6 @@ object GraalNativeImageConfiguration {
digestAlgorithm = "MD5",
compressionLevel = Deflater.DEFAULT_COMPRESSION,
compressionEnabled = false,
chunkSize = 0x1000
),
MemcacheCacheConfiguration(
listOf(MemcacheCacheConfiguration.Server(
@@ -90,7 +89,6 @@ object GraalNativeImageConfiguration {
"MD5",
null,
1,
0x1000
)
)
@@ -106,6 +104,7 @@ object GraalNativeImageConfiguration {
Duration.ofSeconds(15),
Duration.ofSeconds(15),
0x10000,
0x1000
),
users.asSequence().map { it.name to it }.toMap(),
sequenceOf(writersGroup, readersGroup).map { it.name to it }.toMap(),
@@ -126,7 +125,6 @@ object GraalNativeImageConfiguration {
"MD5",
null,
1,
0x1000
)
val serverHandle = RemoteBuildCacheServer(serverConfiguration).run()
@@ -134,7 +132,12 @@ object GraalNativeImageConfiguration {
val clientProfile = ClientConfiguration.Profile(
URI.create("http://127.0.0.1:$serverPort/"),
null,
ClientConfiguration.Connection(
Duration.ofSeconds(5),
Duration.ofSeconds(5),
Duration.ofSeconds(7),
true,
),
ClientConfiguration.Authentication.BasicAuthenticationCredentials("user3", PASSWORD),
Duration.ofSeconds(3),
10,
@@ -176,6 +179,8 @@ object GraalNativeImageConfiguration {
} catch (ee : ExecutionException) {
}
}
RemoteBuildCacheServerCli.main("--help")
System.setProperty("net.woggioni.rbcs.conf.dir", System.getProperty("gradle.tmp.dir"))
RemoteBuildCacheServerCli.createCommandLine().execute("--version")
RemoteBuildCacheServerCli.createCommandLine().execute("server", "-t", "PT10S")
}
}

View File

@@ -26,8 +26,8 @@ class RemoteBuildCacheServerCli : RbcsCommand() {
private fun setPropertyIfNotPresent(key: String, value: String) {
System.getProperty(key) ?: System.setProperty(key, value)
}
@JvmStatic
fun main(vararg args: String) {
fun createCommandLine() : CommandLine {
setPropertyIfNotPresent("logback.configurationFile", "net/woggioni/rbcs/cli/logback.xml")
setPropertyIfNotPresent("io.netty.leakDetectionLevel", "DISABLED")
val currentClassLoader = RemoteBuildCacheServerCli::class.java.classLoader
@@ -56,7 +56,12 @@ class RemoteBuildCacheServerCli : RbcsCommand() {
addSubcommand(GetCommand())
addSubcommand(HealthCheckCommand())
})
System.exit(commandLine.execute(*args))
return commandLine
}
@JvmStatic
fun main(vararg args: String) {
System.exit(createCommandLine().execute(*args))
}
}

View File

@@ -6,7 +6,6 @@ import net.woggioni.rbcs.client.Configuration
import net.woggioni.rbcs.common.createLogger
import net.woggioni.rbcs.common.debug
import picocli.CommandLine
import java.lang.IllegalArgumentException
import java.nio.file.Path
@CommandLine.Command(

View File

@@ -38,11 +38,12 @@ data class Configuration(
val readIdleTimeout: Duration,
val writeIdleTimeout: Duration,
val idleTimeout: Duration,
val requestPipelining : Boolean,
)
data class Profile(
val serverURI: URI,
val connection: Connection?,
val connection: Connection,
val authentication: Authentication?,
val connectionTimeout: Duration?,
val maxConnections: Int,

View File

@@ -4,9 +4,7 @@ import io.netty.bootstrap.Bootstrap
import io.netty.buffer.ByteBuf
import io.netty.buffer.Unpooled
import io.netty.channel.Channel
import io.netty.channel.ChannelHandler
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.ChannelInboundHandlerAdapter
import io.netty.channel.ChannelOption
import io.netty.channel.ChannelPipeline
import io.netty.channel.SimpleChannelInboundHandler
@@ -55,7 +53,7 @@ import kotlin.random.Random
import io.netty.util.concurrent.Future as NettyFuture
class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoCloseable {
companion object{
companion object {
private val log = createLogger<RemoteBuildCacheClient>()
}
@@ -73,7 +71,7 @@ class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoC
*tlsClientAuthenticationCredentials.certificateChain
)
profile.tlsTruststore?.let { trustStore ->
if(!trustStore.verifyServerCertificate) {
if (!trustStore.verifyServerCertificate) {
trustManager(object : X509TrustManager {
override fun checkClientTrusted(certChain: Array<out X509Certificate>, p1: String?) {
}
@@ -176,7 +174,7 @@ class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoC
// HTTP handlers
pipeline.addLast("codec", HttpClientCodec())
if(profile.compressionEnabled) {
if (profile.compressionEnabled) {
pipeline.addLast("decompressor", HttpContentDecompressor())
}
pipeline.addLast("aggregator", HttpObjectAggregator(134217728))
@@ -297,48 +295,32 @@ class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoC
// Custom handler for processing responses
pool.acquire().addListener(object : GenericFutureListener<NettyFuture<Channel>> {
private val handlers = mutableListOf<ChannelHandler>()
fun cleanup(channel: Channel, pipeline: ChannelPipeline) {
handlers.forEach(pipeline::remove)
pool.release(channel)
}
override fun operationComplete(channelFuture: Future<Channel>) {
if (channelFuture.isSuccess) {
val channel = channelFuture.now
val pipeline = channel.pipeline()
val timeoutHandler = object : ChannelInboundHandlerAdapter() {
override fun userEventTriggered(ctx: ChannelHandlerContext, evt: Any) {
if (evt is IdleStateEvent) {
val te = when (evt.state()) {
IdleState.READER_IDLE -> TimeoutException(
"Read timeout",
)
IdleState.WRITER_IDLE -> TimeoutException("Write timeout")
IdleState.ALL_IDLE -> TimeoutException("Idle timeout")
null -> throw IllegalStateException("This should never happen")
}
responseFuture.completeExceptionally(te)
ctx.close()
}
}
}
val closeListener = GenericFutureListener<Future<Void>> {
responseFuture.completeExceptionally(IOException("The remote server closed the connection"))
pool.release(channel)
}
channel.closeFuture().addListener(closeListener)
val responseHandler = object : SimpleChannelInboundHandler<FullHttpResponse>() {
override fun handlerAdded(ctx: ChannelHandlerContext) {
channel.closeFuture().removeListener(closeListener)
}
override fun channelRead0(
ctx: ChannelHandlerContext,
response: FullHttpResponse
) {
channel.closeFuture().removeListener(closeListener)
cleanup(channel, pipeline)
pipeline.remove(this)
responseFuture.complete(response)
if(!profile.connection.requestPipelining) {
pool.release(channel)
}
}
override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
@@ -352,16 +334,39 @@ class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoC
}
override fun channelInactive(ctx: ChannelHandlerContext) {
pool.release(channel)
responseFuture.completeExceptionally(IOException("The remote server closed the connection"))
if(!profile.connection.requestPipelining) {
pool.release(channel)
}
super.channelInactive(ctx)
}
override fun userEventTriggered(ctx: ChannelHandlerContext, evt: Any) {
if (evt is IdleStateEvent) {
val te = when (evt.state()) {
IdleState.READER_IDLE -> TimeoutException(
"Read timeout",
)
IdleState.WRITER_IDLE -> TimeoutException("Write timeout")
IdleState.ALL_IDLE -> TimeoutException("Idle timeout")
null -> throw IllegalStateException("This should never happen")
}
responseFuture.completeExceptionally(te)
super.userEventTriggered(ctx, evt)
if (this === pipeline.last()) {
ctx.close()
}
if(!profile.connection.requestPipelining) {
pool.release(channel)
}
} else {
super.userEventTriggered(ctx, evt)
}
}
}
for (handler in arrayOf(timeoutHandler, responseHandler)) {
handlers.add(handler)
}
pipeline.addLast(timeoutHandler, responseHandler)
channel.closeFuture().addListener(closeListener)
pipeline.addLast(responseHandler)
// Prepare the HTTP request
@@ -373,13 +378,14 @@ class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoC
uri.rawPath,
content ?: Unpooled.buffer(0)
).apply {
// Set headers
headers().apply {
if (content != null) {
set(HttpHeaderNames.CONTENT_LENGTH, content.readableBytes())
}
set(HttpHeaderNames.HOST, profile.serverURI.host)
set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE)
if(profile.compressionEnabled) {
if (profile.compressionEnabled) {
set(
HttpHeaderNames.ACCEPT_ENCODING,
HttpHeaderValues.GZIP.toString() + "," + HttpHeaderValues.DEFLATE.toString()
@@ -398,9 +404,16 @@ class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoC
}
}
// Set headers
// Send the request
channel.writeAndFlush(request)
channel.writeAndFlush(request).addListener {
if(!it.isSuccess) {
val ex = it.cause()
log.warn(ex.message, ex)
}
if(profile.connection.requestPipelining) {
pool.release(channel)
}
}
} else {
responseFuture.completeExceptionally(channelFuture.cause())
}

View File

@@ -30,7 +30,12 @@ object Parser {
?: throw ConfigurationException("base-url attribute is required")
var authentication: Configuration.Authentication? = null
var retryPolicy: Configuration.RetryPolicy? = null
var connection : Configuration.Connection? = null
var connection : Configuration.Connection = Configuration.Connection(
Duration.ofSeconds(60),
Duration.ofSeconds(60),
Duration.ofSeconds(30),
false
)
var trustStore : Configuration.TrustStore? = null
for (gchild in child.asIterable()) {
when (gchild.localName) {
@@ -97,10 +102,13 @@ object Parser {
?.let(Duration::parse) ?: Duration.of(60, ChronoUnit.SECONDS)
val writeIdleTimeout = gchild.renderAttribute("write-idle-timeout")
?.let(Duration::parse) ?: Duration.of(60, ChronoUnit.SECONDS)
val requestPipelining = gchild.renderAttribute("request-pipelining")
?.let(String::toBoolean) ?: false
connection = Configuration.Connection(
readIdleTimeout,
writeIdleTimeout,
idleTimeout,
requestPipelining
)
}

View File

@@ -123,6 +123,13 @@
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="request-pipelining" type="xs:boolean" use="optional" default="false">
<xs:annotation>
<xs:documentation>
Enables HTTP/1.1 request pipelining
</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:complexType>
<xs:complexType name="noAuthType">

View File

@@ -6,7 +6,7 @@ plugins {
}
dependencies {
implementation project(':rbcs-api')
implementation catalog.netty.transport
implementation catalog.slf4j.api
implementation catalog.jwo
implementation catalog.netty.buffer

View File

@@ -22,7 +22,7 @@ The plugin currently supports the following configuration attributes:
- `digest`: digest algorithm to use on the key before submission
to memcache (optional, no digest is applied if omitted)
- `compression`: compression algorithm to apply to cache values before,
currently only `deflate` is supported (optionla, if omitted compression is disabled)
currently only `deflate` is supported (optional, if omitted compression is disabled)
- `compression-level`: compression level to use, deflate supports compression levels from 1 to 9,
where 1 is for fast compression at the expense of compression ratio (optional, 6 is used if omitted)
```xml
@@ -37,8 +37,7 @@ The plugins currently supports the following configuration attributes:
max-age="P7D"
digest="SHA-256"
compression-mode="deflate"
compression-level="6"
chunk-size="0x10000">
compression-level="6">
<server host="127.0.0.1" port="11211" max-connections="256"/>
<server host="127.0.0.1" port="11212" max-connections="256"/>
</cache>

View File

@@ -1,14 +1,15 @@
package net.woggioni.rbcs.server.memcache
import io.netty.channel.ChannelFactory
import io.netty.channel.ChannelHandler
import io.netty.channel.EventLoopGroup
import io.netty.channel.pool.FixedChannelPool
import io.netty.channel.socket.DatagramChannel
import io.netty.channel.socket.SocketChannel
import net.woggioni.rbcs.api.CacheHandler
import net.woggioni.rbcs.api.CacheHandlerFactory
import net.woggioni.rbcs.api.Configuration
import net.woggioni.rbcs.common.HostAndPort
import net.woggioni.rbcs.common.createLogger
import net.woggioni.rbcs.server.memcache.client.MemcacheClient
import java.time.Duration
import java.util.concurrent.CompletableFuture
@@ -22,9 +23,12 @@ data class MemcacheCacheConfiguration(
val digestAlgorithm: String? = null,
val compressionMode: CompressionMode? = null,
val compressionLevel: Int,
val chunkSize: Int
) : Configuration.Cache {
companion object {
private val log = createLogger<MemcacheCacheConfiguration>()
}
enum class CompressionMode {
/**
* Deflate mode
@@ -43,14 +47,15 @@ data class MemcacheCacheConfiguration(
private val connectionPoolMap = ConcurrentHashMap<HostAndPort, FixedChannelPool>()
override fun newHandler(
cfg : Configuration,
eventLoop: EventLoopGroup,
socketChannelFactory: ChannelFactory<SocketChannel>,
datagramChannelFactory: ChannelFactory<DatagramChannel>
): ChannelHandler {
datagramChannelFactory: ChannelFactory<DatagramChannel>,
): CacheHandler {
return MemcacheCacheHandler(
MemcacheClient(
this@MemcacheCacheConfiguration.servers,
chunkSize,
cfg.connection.chunkSize,
eventLoop,
socketChannelFactory,
connectionPoolMap
@@ -58,7 +63,7 @@ data class MemcacheCacheConfiguration(
digestAlgorithm,
compressionMode != null,
compressionLevel,
chunkSize,
cfg.connection.chunkSize,
maxAge
)
}
@@ -69,15 +74,19 @@ data class MemcacheCacheConfiguration(
val pools = connectionPoolMap.values.toList()
val npools = pools.size
val finished = AtomicInteger(0)
pools.forEach { pool ->
pool.closeAsync().addListener {
if (!it.isSuccess) {
failure.compareAndSet(null, it.cause())
}
if(finished.incrementAndGet() == npools) {
when(val ex = failure.get()) {
null -> complete(null)
else -> completeExceptionally(ex)
if (pools.isEmpty()) {
complete(null)
} else {
pools.forEach { pool ->
pool.closeAsync().addListener {
if (!it.isSuccess) {
failure.compareAndSet(null, it.cause())
}
if (finished.incrementAndGet() == npools) {
when (val ex = failure.get()) {
null -> complete(null)
else -> completeExceptionally(ex)
}
}
}
}

View File

@@ -4,7 +4,6 @@ import io.netty.buffer.ByteBuf
import io.netty.buffer.ByteBufAllocator
import io.netty.buffer.CompositeByteBuf
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.SimpleChannelInboundHandler
import io.netty.handler.codec.memcache.DefaultLastMemcacheContent
import io.netty.handler.codec.memcache.DefaultMemcacheContent
import io.netty.handler.codec.memcache.LastMemcacheContent
@@ -13,6 +12,7 @@ import io.netty.handler.codec.memcache.binary.BinaryMemcacheOpcodes
import io.netty.handler.codec.memcache.binary.BinaryMemcacheResponse
import io.netty.handler.codec.memcache.binary.BinaryMemcacheResponseStatus
import io.netty.handler.codec.memcache.binary.DefaultBinaryMemcacheRequest
import net.woggioni.rbcs.api.CacheHandler
import net.woggioni.rbcs.api.CacheValueMetadata
import net.woggioni.rbcs.api.exception.ContentTooLargeException
import net.woggioni.rbcs.api.message.CacheMessage
@@ -58,7 +58,7 @@ class MemcacheCacheHandler(
private val compressionLevel: Int,
private val chunkSize: Int,
private val maxAge: Duration
) : SimpleChannelInboundHandler<CacheMessage>() {
) : CacheHandler() {
companion object {
private val log = createLogger<MemcacheCacheHandler>()
@@ -69,10 +69,14 @@ class MemcacheCacheHandler(
}
}
private interface InProgressRequest {
}
private inner class InProgressGetRequest(
private val key: String,
val key: String,
private val ctx: ChannelHandlerContext
) {
) : InProgressRequest {
private val acc = ctx.alloc().compositeBuffer()
private val chunk = ctx.alloc().compositeBuffer()
private val outputStream = ByteBufOutputStream(chunk).let {
@@ -98,32 +102,35 @@ class MemcacheCacheHandler(
acc.retain()
it.readObject() as CacheValueMetadata
}
ctx.writeAndFlush(CacheValueFoundResponse(key, metadata))
log.trace(ctx) {
"Sending response from cache"
}
sendMessageAndFlush(ctx, CacheValueFoundResponse(key, metadata))
responseSent = true
acc.readerIndex(Int.SIZE_BYTES + mSize)
}
if (responseSent) {
acc.readBytes(outputStream, acc.readableBytes())
if(acc.readableBytes() >= chunkSize) {
if (acc.readableBytes() >= chunkSize) {
flush(false)
}
}
}
private fun flush(last : Boolean) {
private fun flush(last: Boolean) {
val toSend = extractChunk(chunk, ctx.alloc())
val msg = if(last) {
val msg = if (last) {
log.trace(ctx) {
"Sending last chunk to client on channel ${ctx.channel().id().asShortText()}"
"Sending last chunk to client"
}
LastCacheContent(toSend)
} else {
log.trace(ctx) {
"Sending chunk to client on channel ${ctx.channel().id().asShortText()}"
"Sending chunk to client"
}
CacheContent(toSend)
}
ctx.writeAndFlush(msg)
sendMessageAndFlush(ctx, msg)
}
fun commit() {
@@ -141,14 +148,14 @@ class MemcacheCacheHandler(
}
private inner class InProgressPutRequest(
private val ch : NettyChannel,
metadata : CacheValueMetadata,
val digest : ByteBuf,
private val ch: NettyChannel,
metadata: CacheValueMetadata,
val digest: ByteBuf,
val requestController: CompletableFuture<MemcacheRequestController>,
private val alloc: ByteBufAllocator
) {
) : InProgressRequest {
private var totalSize = 0
private var tmpFile : FileChannel? = null
private var tmpFile: FileChannel? = null
private val accumulator = alloc.compositeBuffer()
private val stream = ByteBufOutputStream(accumulator).let {
if (compressionEnabled) {
@@ -175,7 +182,7 @@ class MemcacheCacheHandler(
tmpFile?.let {
flushToDisk(it, accumulator)
}
if(accumulator.readableBytes() > 0x100000) {
if (accumulator.readableBytes() > 0x100000) {
log.debug(ch) {
"Entry is too big, buffering it into a file"
}
@@ -192,18 +199,18 @@ class MemcacheCacheHandler(
}
}
private fun flushToDisk(fc : FileChannel, buf : CompositeByteBuf) {
private fun flushToDisk(fc: FileChannel, buf: CompositeByteBuf) {
val chunk = extractChunk(buf, alloc)
fc.write(chunk.nioBuffer())
chunk.release()
}
fun commit() : Pair<Int, ReadableByteChannel> {
fun commit(): Pair<Int, ReadableByteChannel> {
digest.release()
accumulator.retain()
stream.close()
val fileChannel = tmpFile
return if(fileChannel != null) {
return if (fileChannel != null) {
flushToDisk(fileChannel, accumulator)
accumulator.release()
fileChannel.position(0)
@@ -224,8 +231,7 @@ class MemcacheCacheHandler(
}
}
private var inProgressPutRequest: InProgressPutRequest? = null
private var inProgressGetRequest: InProgressGetRequest? = null
private var inProgressRequest: InProgressRequest? = null
override fun channelRead0(ctx: ChannelHandlerContext, msg: CacheMessage) {
when (msg) {
@@ -252,32 +258,39 @@ class MemcacheCacheHandler(
log.debug(ctx) {
"Cache hit for key ${msg.key} on memcache"
}
inProgressGetRequest = InProgressGetRequest(msg.key, ctx)
inProgressRequest = InProgressGetRequest(msg.key, ctx)
}
BinaryMemcacheResponseStatus.KEY_ENOENT -> {
log.debug(ctx) {
"Cache miss for key ${msg.key} on memcache"
}
ctx.writeAndFlush(CacheValueNotFoundResponse())
sendMessageAndFlush(ctx, CacheValueNotFoundResponse())
}
}
}
override fun contentReceived(content: MemcacheContent) {
log.trace(ctx) {
"${if(content is LastMemcacheContent) "Last chunk" else "Chunk"} of ${content.content().readableBytes()} bytes received from memcache for key ${msg.key}"
"${if (content is LastMemcacheContent) "Last chunk" else "Chunk"} of ${
content.content().readableBytes()
} bytes received from memcache for key ${msg.key}"
}
inProgressGetRequest?.write(content.content())
if (content is LastMemcacheContent) {
inProgressGetRequest?.commit()
(inProgressRequest as? InProgressGetRequest)?.let { inProgressGetRequest ->
inProgressGetRequest.write(content.content())
if (content is LastMemcacheContent) {
inProgressRequest = null
inProgressGetRequest.commit()
}
}
}
override fun exceptionCaught(ex: Throwable) {
inProgressGetRequest?.let {
inProgressGetRequest = null
it.rollback()
(inProgressRequest as? InProgressGetRequest).let { inProgressGetRequest ->
inProgressGetRequest?.let {
inProgressRequest = null
it.rollback()
}
}
this@MemcacheCacheHandler.exceptionCaught(ctx, ex)
}
@@ -290,6 +303,7 @@ class MemcacheCacheHandler(
setOpcode(BinaryMemcacheOpcodes.GET)
}
requestHandle.sendRequest(request)
requestHandle.sendContent(LastMemcacheContent.EMPTY_LAST_CONTENT)
}
}
@@ -305,8 +319,9 @@ class MemcacheCacheHandler(
log.debug(ctx) {
"Inserted key ${msg.key} into memcache"
}
ctx.writeAndFlush(CachePutResponse(msg.key))
sendMessageAndFlush(ctx, CachePutResponse(msg.key))
}
else -> this@MemcacheCacheHandler.exceptionCaught(ctx, MemcacheException(status))
}
}
@@ -323,86 +338,103 @@ class MemcacheCacheHandler(
this@MemcacheCacheHandler.exceptionCaught(ctx, ex)
}
}
inProgressPutRequest = InProgressPutRequest(ctx.channel(), msg.metadata, key, requestController, ctx.alloc())
inProgressRequest = InProgressPutRequest(ctx.channel(), msg.metadata, key, requestController, ctx.alloc())
}
private fun handleCacheContent(ctx: ChannelHandlerContext, msg: CacheContent) {
inProgressPutRequest?.let { request ->
log.trace(ctx) {
"Received chunk of ${msg.content().readableBytes()} bytes for memcache"
val request = inProgressRequest
when (request) {
is InProgressPutRequest -> {
log.trace(ctx) {
"Received chunk of ${msg.content().readableBytes()} bytes for memcache"
}
request.write(msg.content())
}
is InProgressGetRequest -> {
msg.release()
}
request.write(msg.content())
}
}
private fun handleLastCacheContent(ctx: ChannelHandlerContext, msg: LastCacheContent) {
inProgressPutRequest?.let { request ->
inProgressPutRequest = null
log.trace(ctx) {
"Received last chunk of ${msg.content().readableBytes()} bytes for memcache"
}
request.write(msg.content())
val key = request.digest.retainedDuplicate()
val (payloadSize, payloadSource) = request.commit()
val extras = ctx.alloc().buffer(8, 8)
extras.writeInt(0)
extras.writeInt(encodeExpiry(maxAge))
val totalBodyLength = request.digest.readableBytes() + extras.readableBytes() + payloadSize
request.requestController.whenComplete { requestController, ex ->
if(ex == null) {
log.trace(ctx) {
"Sending SET request to memcache"
}
requestController.sendRequest(DefaultBinaryMemcacheRequest().apply {
setOpcode(BinaryMemcacheOpcodes.SET)
setKey(key)
setExtras(extras)
setTotalBodyLength(totalBodyLength)
})
log.trace(ctx) {
"Sending request payload to memcache"
}
payloadSource.use { source ->
val bb = ByteBuffer.allocate(chunkSize)
while (true) {
val read = source.read(bb)
bb.limit()
if(read >= 0 && bb.position() < chunkSize && bb.hasRemaining()) {
continue
}
val chunk = ctx.alloc().buffer(chunkSize)
bb.flip()
chunk.writeBytes(bb)
bb.clear()
log.trace(ctx) {
"Sending ${chunk.readableBytes()} bytes chunk to memcache"
}
if(read < 0) {
requestController.sendContent(DefaultLastMemcacheContent(chunk))
break
} else {
requestController.sendContent(DefaultMemcacheContent(chunk))
val request = inProgressRequest
when (request) {
is InProgressPutRequest -> {
inProgressRequest = null
log.trace(ctx) {
"Received last chunk of ${msg.content().readableBytes()} bytes for memcache"
}
request.write(msg.content())
val key = request.digest.retainedDuplicate()
val (payloadSize, payloadSource) = request.commit()
val extras = ctx.alloc().buffer(8, 8)
extras.writeInt(0)
extras.writeInt(encodeExpiry(maxAge))
val totalBodyLength = request.digest.readableBytes() + extras.readableBytes() + payloadSize
log.trace(ctx) {
"Trying to send SET request to memcache"
}
request.requestController.whenComplete { requestController, ex ->
if (ex == null) {
log.trace(ctx) {
"Sending SET request to memcache"
}
requestController.sendRequest(DefaultBinaryMemcacheRequest().apply {
setOpcode(BinaryMemcacheOpcodes.SET)
setKey(key)
setExtras(extras)
setTotalBodyLength(totalBodyLength)
})
log.trace(ctx) {
"Sending request payload to memcache"
}
payloadSource.use { source ->
val bb = ByteBuffer.allocate(chunkSize)
while (true) {
val read = source.read(bb)
bb.limit()
if (read >= 0 && bb.position() < chunkSize && bb.hasRemaining()) {
continue
}
val chunk = ctx.alloc().buffer(chunkSize)
bb.flip()
chunk.writeBytes(bb)
bb.clear()
log.trace(ctx) {
"Sending ${chunk.readableBytes()} bytes chunk to memcache"
}
if (read < 0) {
requestController.sendContent(DefaultLastMemcacheContent(chunk))
break
} else {
requestController.sendContent(DefaultMemcacheContent(chunk))
}
}
}
} else {
payloadSource.close()
}
} else {
payloadSource.close()
}
}
}
}
override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
inProgressGetRequest?.let {
inProgressGetRequest = null
it.rollback()
}
inProgressPutRequest?.let {
inProgressPutRequest = null
it.requestController.thenAccept { controller ->
controller.exceptionCaught(cause)
val request = inProgressRequest
when (request) {
is InProgressPutRequest -> {
inProgressRequest = null
request.requestController.thenAccept { controller ->
controller.exceptionCaught(cause)
}
request.rollback()
}
is InProgressGetRequest -> {
inProgressRequest = null
request.rollback()
}
it.rollback()
}
super.exceptionCaught(ctx, cause)
}
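
The refactor above collapses the two separate `inProgressGetRequest`/`inProgressPutRequest` fields into a single `inProgressRequest` that handlers dispatch on by concrete type, giving `exceptionCaught` one place to roll back. A minimal, self-contained sketch of that pattern — the names below are illustrative and not part of the project's API:

```kotlin
// Illustrative sketch only: one nullable field tracks whichever request is in
// flight, and incoming content is dispatched on its concrete type instead of
// consulting one field per request kind.
sealed interface InFlight
class InFlightGet(val key: String) : InFlight
class InFlightPut(val key: String, val chunks: MutableList<ByteArray> = mutableListOf()) : InFlight

class RequestStateMachine {
    private var inFlight: InFlight? = null

    fun startGet(key: String) { inFlight = InFlightGet(key) }
    fun startPut(key: String) { inFlight = InFlightPut(key) }

    fun onContent(chunk: ByteArray, last: Boolean) {
        when (val req = inFlight) {
            is InFlightPut -> {
                req.chunks += chunk
                if (last) {
                    inFlight = null
                    println("committing ${req.chunks.sumOf { it.size }} bytes for key '${req.key}'")
                }
            }
            is InFlightGet -> { /* body chunks are irrelevant for a GET, drop them */ }
            null -> { /* no request in progress, ignore */ }
        }
    }

    fun onError() {
        inFlight = null // a single place to roll back whatever was in progress
    }
}

fun main() {
    val fsm = RequestStateMachine()
    fsm.startPut("some-key")
    fsm.onContent(ByteArray(16), last = false)
    fsm.onContent(ByteArray(8), last = true)
}
```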

View File

@@ -28,9 +28,6 @@ class MemcacheCacheProvider : CacheProvider<MemcacheCacheConfiguration> {
val maxAge = el.renderAttribute("max-age")
?.let(Duration::parse)
?: Duration.ofDays(1)
val chunkSize = el.renderAttribute("chunk-size")
?.let(Integer::decode)
?: 0x10000
val compressionLevel = el.renderAttribute("compression-level")
?.let(Integer::decode)
?: -1
@@ -63,8 +60,7 @@ class MemcacheCacheProvider : CacheProvider<MemcacheCacheConfiguration> {
maxAge,
digestAlgorithm,
compressionMode,
compressionLevel,
chunkSize
compressionLevel
)
}
@@ -84,7 +80,6 @@ class MemcacheCacheProvider : CacheProvider<MemcacheCacheConfiguration> {
}
}
attr("max-age", maxAge.toString())
attr("chunk-size", chunkSize.toString())
digestAlgorithm?.let { digestAlgorithm ->
attr("digest", digestAlgorithm)
}

View File

@@ -12,7 +12,6 @@ import io.netty.channel.ChannelPipeline
import io.netty.channel.EventLoopGroup
import io.netty.channel.SimpleChannelInboundHandler
import io.netty.channel.pool.AbstractChannelPoolHandler
import io.netty.channel.pool.ChannelPool
import io.netty.channel.pool.FixedChannelPool
import io.netty.channel.socket.SocketChannel
import io.netty.handler.codec.memcache.LastMemcacheContent
@@ -24,7 +23,7 @@ import io.netty.handler.codec.memcache.binary.BinaryMemcacheResponse
import io.netty.util.concurrent.GenericFutureListener
import net.woggioni.rbcs.common.HostAndPort
import net.woggioni.rbcs.common.createLogger
import net.woggioni.rbcs.common.warn
import net.woggioni.rbcs.common.trace
import net.woggioni.rbcs.server.memcache.MemcacheCacheConfiguration
import net.woggioni.rbcs.server.memcache.MemcacheCacheHandler
import java.io.IOException
@@ -94,18 +93,6 @@ class MemcacheClient(
pool.acquire().addListener(object : GenericFutureListener<NettyFuture<Channel>> {
override fun operationComplete(channelFuture: NettyFuture<Channel>) {
if (channelFuture.isSuccess) {
var requestSent = false
var requestBodySent = false
var requestFinished = false
var responseReceived = false
var responseBodyReceived = false
var responseFinished = false
var requestBodySize = 0
var requestBodyBytesSent = 0
val channel = channelFuture.now
var connectionClosedByTheRemoteServer = true
val closeCallback = {
@@ -113,14 +100,7 @@ class MemcacheClient(
val ex = IOException("The memcache server closed the connection")
val completed = response.completeExceptionally(ex)
if(!completed) responseHandler.exceptionCaught(ex)
log.warn {
"RequestSent: $requestSent, RequestBodySent: $requestBodySent, " +
"RequestFinished: $requestFinished, ResponseReceived: $responseReceived, " +
"ResponseBodyReceived: $responseBodyReceived, ResponseFinished: $responseFinished, " +
"RequestBodySize: $requestBodySize, RequestBodyBytesSent: $requestBodyBytesSent"
}
}
pool.release(channel)
}
val closeListener = ChannelFutureListener {
closeCallback()
@@ -140,18 +120,14 @@ class MemcacheClient(
when (msg) {
is BinaryMemcacheResponse -> {
responseHandler.responseReceived(msg)
responseReceived = true
}
is LastMemcacheContent -> {
responseFinished = true
responseHandler.contentReceived(msg)
pipeline.remove(this)
pool.release(channel)
}
is MemcacheContent -> {
responseBodyReceived = true
responseHandler.contentReceived(msg)
}
}
@@ -165,35 +141,43 @@ class MemcacheClient(
override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
connectionClosedByTheRemoteServer = false
ctx.close()
pool.release(channel)
responseHandler.exceptionCaught(cause)
}
}
channel.pipeline()
.addLast("client-handler", handler)
channel.pipeline().addLast(handler)
response.complete(object : MemcacheRequestController {
private var channelReleased = false
override fun sendRequest(request: BinaryMemcacheRequest) {
requestBodySize = request.totalBodyLength() - request.keyLength() - request.extrasLength()
channel.writeAndFlush(request)
requestSent = true
}
override fun sendContent(content: MemcacheContent) {
val size = content.content().readableBytes()
channel.writeAndFlush(content).addListener {
requestBodyBytesSent += size
requestBodySent = true
if(content is LastMemcacheContent) {
requestFinished = true
if(!channelReleased) {
pool.release(channel)
channelReleased = true
log.trace(channel) {
"Channel released"
}
}
}
}
}
override fun exceptionCaught(ex: Throwable) {
log.warn(ex.message, ex)
connectionClosedByTheRemoteServer = false
channel.close()
if(!channelReleased) {
pool.release(channel)
channelReleased = true
log.trace(channel) {
"Channel released"
}
}
}
})
} else {

View File

@@ -24,6 +24,7 @@ module net.woggioni.rbcs.server {
opens net.woggioni.rbcs.server;
opens net.woggioni.rbcs.server.schema;
uses CacheProvider;
provides CacheProvider with FileSystemCacheProvider, InMemoryCacheProvider;
}

View File

@@ -21,6 +21,7 @@ import io.netty.channel.socket.nio.NioSocketChannel
import io.netty.handler.codec.compression.CompressionOptions
import io.netty.handler.codec.http.DefaultHttpContent
import io.netty.handler.codec.http.HttpContentCompressor
import io.netty.handler.codec.http.HttpDecoderConfig
import io.netty.handler.codec.http.HttpHeaderNames
import io.netty.handler.codec.http.HttpRequest
import io.netty.handler.codec.http.HttpServerCodec
@@ -53,9 +54,9 @@ import net.woggioni.rbcs.server.auth.RoleAuthorizer
import net.woggioni.rbcs.server.configuration.Parser
import net.woggioni.rbcs.server.configuration.Serializer
import net.woggioni.rbcs.server.exception.ExceptionHandler
import net.woggioni.rbcs.server.handler.BlackHoleRequestHandler
import net.woggioni.rbcs.server.handler.MaxRequestSizeHandler
import net.woggioni.rbcs.server.handler.ServerHandler
import net.woggioni.rbcs.server.handler.TraceHandler
import net.woggioni.rbcs.server.throttling.BucketManager
import net.woggioni.rbcs.server.throttling.ThrottlingHandler
import java.io.OutputStream
@@ -298,6 +299,7 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
"Closed connection ${ch.id().asShortText()} with ${ch.remoteAddress()}"
}
}
ch.config().setAutoRead(false)
val pipeline = ch.pipeline()
cfg.connection.also { conn ->
val readIdleTimeout = conn.readIdleTimeout.toMillis()
@@ -340,7 +342,10 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
sslContext?.newHandler(ch.alloc())?.also {
pipeline.addLast(SSL_HANDLER_NAME, it)
}
pipeline.addLast(HttpServerCodec())
val httpDecoderConfig = HttpDecoderConfig().apply {
maxChunkSize = cfg.connection.chunkSize
}
pipeline.addLast(HttpServerCodec(httpDecoderConfig))
pipeline.addLast(MaxRequestSizeHandler.NAME, MaxRequestSizeHandler(cfg.connection.maxRequestSize))
pipeline.addLast(HttpChunkContentCompressor(1024))
pipeline.addLast(ChunkedWriteHandler())
@@ -351,13 +356,13 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
val serverHandler = let {
val prefix = Path.of("/").resolve(Path.of(cfg.serverPath ?: "/"))
ServerHandler(prefix)
ServerHandler(prefix) {
cacheHandlerFactory.newHandler(cfg, ch.eventLoop(), channelFactory, datagramChannelFactory)
}
}
pipeline.addLast(eventExecutorGroup, ServerHandler.NAME, serverHandler)
pipeline.addLast(cacheHandlerFactory.newHandler(ch.eventLoop(), channelFactory, datagramChannelFactory))
pipeline.addLast(TraceHandler)
pipeline.addLast(ExceptionHandler)
pipeline.addLast(ExceptionHandler.NAME, ExceptionHandler)
pipeline.addLast(BlackHoleRequestHandler.NAME, BlackHoleRequestHandler())
}
override fun asyncClose() = cacheHandlerFactory.asyncClose()
@@ -368,13 +373,14 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
private val bossGroup: EventExecutorGroup,
private val executorGroups: Iterable<EventExecutorGroup>,
private val serverInitializer: AsyncCloseable,
) : Future<Void> by from(closeFuture, executorGroups, serverInitializer) {
) : Future<Void> by from(closeFuture, bossGroup, executorGroups, serverInitializer) {
companion object {
private val log = createLogger<ServerHandle>()
private fun from(
closeFuture: ChannelFuture,
bossGroup: EventExecutorGroup,
executorGroups: Iterable<EventExecutorGroup>,
serverInitializer: AsyncCloseable
): CompletableFuture<Void> {
@@ -382,22 +388,15 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
closeFuture.addListener {
val errors = mutableListOf<Throwable>()
val deadline = Instant.now().plusSeconds(20)
try {
serverInitializer.close()
} catch (ex: Throwable) {
log.error(ex.message, ex)
errors.addLast(ex)
}
serverInitializer.asyncClose().whenComplete { _, ex ->
serverInitializer.asyncClose().whenCompleteAsync { _, ex ->
if(ex != null) {
log.error(ex.message, ex)
errors.addLast(ex)
}
executorGroups.map {
it.shutdownGracefully()
}
executorGroups.forEach(EventExecutorGroup::shutdownGracefully)
bossGroup.terminationFuture().sync()
for (executorGroup in executorGroups) {
val future = executorGroup.terminationFuture()
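
The initializer hunks above make the connection-level `chunk-size` drive the HTTP decoder's maximum chunk size and switch accepted channels to manual reads (`autoRead = false`), so the server only pulls more data when it is ready for the next request. A stripped-down sketch of that wiring, assuming a recent Netty 4.1.x and omitting TLS, compression, authentication, throttling and the RBCS handlers:

```kotlin
import io.netty.channel.ChannelInitializer
import io.netty.channel.socket.SocketChannel
import io.netty.handler.codec.http.HttpDecoderConfig
import io.netty.handler.codec.http.HttpServerCodec
import io.netty.handler.stream.ChunkedWriteHandler

// Sketch only: the real pipeline installs many more handlers around these.
class SketchInitializer(private val chunkSize: Int) : ChannelInitializer<SocketChannel>() {
    override fun initChannel(ch: SocketChannel) {
        // Disable auto-read: downstream handlers decide when to ask for more data,
        // which keeps buffering bounded when clients pipeline many requests.
        ch.config().isAutoRead = false
        val decoderConfig = HttpDecoderConfig().apply {
            maxChunkSize = chunkSize // one knob shared by the server and the cache backends
        }
        ch.pipeline()
            .addLast(HttpServerCodec(decoderConfig))
            .addLast(ChunkedWriteHandler())
        // ...application handlers go here; the last reader must call ctx.read()
        // whenever it is ready for the next request.
    }
}
```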

View File

@@ -17,7 +17,6 @@ data class FileSystemCacheConfiguration(
val digestAlgorithm : String?,
val compressionEnabled: Boolean,
val compressionLevel: Int,
val chunkSize: Int,
) : Configuration.Cache {
override fun materialize() = object : CacheHandlerFactory {
@@ -26,10 +25,11 @@ data class FileSystemCacheConfiguration(
override fun asyncClose() = cache.asyncClose()
override fun newHandler(
cfg : Configuration,
eventLoop: EventLoopGroup,
socketChannelFactory: ChannelFactory<SocketChannel>,
datagramChannelFactory: ChannelFactory<DatagramChannel>
) = FileSystemCacheHandler(cache, digestAlgorithm, compressionEnabled, compressionLevel, chunkSize)
) = FileSystemCacheHandler(cache, digestAlgorithm, compressionEnabled, compressionLevel, cfg.connection.chunkSize)
}
override fun getNamespaceURI() = RBCS.RBCS_NAMESPACE_URI

View File

@@ -2,9 +2,9 @@ package net.woggioni.rbcs.server.cache
import io.netty.buffer.ByteBuf
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.SimpleChannelInboundHandler
import io.netty.handler.codec.http.LastHttpContent
import io.netty.handler.stream.ChunkedNioFile
import net.woggioni.rbcs.api.CacheHandler
import net.woggioni.rbcs.api.message.CacheMessage
import net.woggioni.rbcs.api.message.CacheMessage.CacheContent
import net.woggioni.rbcs.api.message.CacheMessage.CacheGetRequest
@@ -26,12 +26,18 @@ class FileSystemCacheHandler(
private val compressionEnabled: Boolean,
private val compressionLevel: Int,
private val chunkSize: Int
) : SimpleChannelInboundHandler<CacheMessage>() {
) : CacheHandler() {
private interface InProgressRequest{
}
private class InProgressGetRequest(val request : CacheGetRequest) : InProgressRequest
private inner class InProgressPutRequest(
val key : String,
private val fileSink : FileSystemCache.FileSink
) {
) : InProgressRequest {
private val stream = Channels.newOutputStream(fileSink.channel).let {
if (compressionEnabled) {
@@ -55,7 +61,7 @@ class FileSystemCacheHandler(
}
}
private var inProgressPutRequest: InProgressPutRequest? = null
private var inProgressRequest: InProgressRequest? = null
override fun channelRead0(ctx: ChannelHandlerContext, msg: CacheMessage) {
when (msg) {
@@ -68,55 +74,64 @@ class FileSystemCacheHandler(
}
private fun handleGetRequest(ctx: ChannelHandlerContext, msg: CacheGetRequest) {
val key = String(Base64.getUrlEncoder().encode(processCacheKey(msg.key, digestAlgorithm)))
cache.get(key)?.also { entryValue ->
ctx.writeAndFlush(CacheValueFoundResponse(msg.key, entryValue.metadata))
entryValue.channel.let { channel ->
if(compressionEnabled) {
InflaterInputStream(Channels.newInputStream(channel)).use { stream ->
inProgressRequest = InProgressGetRequest(msg)
outerLoop@
while (true) {
val buf = ctx.alloc().heapBuffer(chunkSize)
while(buf.readableBytes() < chunkSize) {
val read = buf.writeBytes(stream, chunkSize)
if(read < 0) {
ctx.writeAndFlush(LastCacheContent(buf))
break@outerLoop
}
}
ctx.writeAndFlush(CacheContent(buf))
}
}
} else {
ctx.writeAndFlush(ChunkedNioFile(channel, entryValue.offset, entryValue.size - entryValue.offset, chunkSize))
ctx.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT)
}
}
} ?: ctx.writeAndFlush(CacheValueNotFoundResponse())
}
private fun handlePutRequest(ctx: ChannelHandlerContext, msg: CachePutRequest) {
val key = String(Base64.getUrlEncoder().encode(processCacheKey(msg.key, digestAlgorithm)))
val sink = cache.put(key, msg.metadata)
inProgressPutRequest = InProgressPutRequest(msg.key, sink)
inProgressRequest = InProgressPutRequest(msg.key, sink)
}
private fun handleCacheContent(ctx: ChannelHandlerContext, msg: CacheContent) {
inProgressPutRequest!!.write(msg.content())
val request = inProgressRequest
if(request is InProgressPutRequest) {
request.write(msg.content())
}
}
private fun handleLastCacheContent(ctx: ChannelHandlerContext, msg: LastCacheContent) {
inProgressPutRequest?.let { request ->
inProgressPutRequest = null
request.write(msg.content())
request.commit()
ctx.writeAndFlush(CachePutResponse(request.key))
when(val request = inProgressRequest) {
is InProgressPutRequest -> {
inProgressRequest = null
request.write(msg.content())
request.commit()
sendMessageAndFlush(ctx, CachePutResponse(request.key))
}
is InProgressGetRequest -> {
val key = String(Base64.getUrlEncoder().encode(processCacheKey(request.request.key, digestAlgorithm)))
cache.get(key)?.also { entryValue ->
sendMessageAndFlush(ctx, CacheValueFoundResponse(request.request.key, entryValue.metadata))
entryValue.channel.let { channel ->
if(compressionEnabled) {
InflaterInputStream(Channels.newInputStream(channel)).use { stream ->
outerLoop@
while (true) {
val buf = ctx.alloc().heapBuffer(chunkSize)
while(buf.readableBytes() < chunkSize) {
val read = buf.writeBytes(stream, chunkSize)
if(read < 0) {
sendMessageAndFlush(ctx, LastCacheContent(buf))
break@outerLoop
}
}
sendMessageAndFlush(ctx, CacheContent(buf))
}
}
} else {
sendMessage(ctx, ChunkedNioFile(channel, entryValue.offset, entryValue.size - entryValue.offset, chunkSize))
sendMessageAndFlush(ctx, LastHttpContent.EMPTY_LAST_CONTENT)
}
}
} ?: sendMessageAndFlush(ctx, CacheValueNotFoundResponse())
}
}
}
override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
inProgressPutRequest?.rollback()
(inProgressRequest as? InProgressPutRequest)?.rollback()
super.exceptionCaught(ctx, cause)
}
}
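
With this change the GET lookup runs only once the request has been fully read, and the stored entry is streamed back either as a `ChunkedNioFile` (uncompressed) or by re-chunking an `InflaterInputStream` into `chunkSize`-sized buffers. A self-contained sketch of that re-chunking loop, using plain byte arrays instead of Netty buffers:

```kotlin
import java.io.ByteArrayInputStream
import java.io.ByteArrayOutputStream
import java.io.InputStream
import java.util.zip.Deflater
import java.util.zip.DeflaterOutputStream
import java.util.zip.InflaterInputStream

// Sketch only: reads a stream and re-emits it as fixed-size chunks, flagging the
// final chunk so the caller can turn it into a "last content" frame.
fun streamInChunks(stream: InputStream, chunkSize: Int, emit: (chunk: ByteArray, last: Boolean) -> Unit) {
    val buf = ByteArray(chunkSize)
    var filled = 0
    while (true) {
        val read = stream.read(buf, filled, chunkSize - filled)
        if (read < 0) {
            emit(buf.copyOf(filled), true)
            break
        }
        filled += read
        if (filled == chunkSize) {
            emit(buf.copyOf(), false)
            filled = 0
        }
    }
}

fun main() {
    // Deflate some sample data, then stream it back out in 8-byte chunks.
    val original = "some cached build output".toByteArray()
    val compressed = ByteArrayOutputStream().also { bos ->
        DeflaterOutputStream(bos, Deflater()).use { it.write(original) }
    }.toByteArray()
    InflaterInputStream(ByteArrayInputStream(compressed)).use { inflated ->
        streamInChunks(inflated, chunkSize = 8) { chunk, last ->
            println("chunk of ${chunk.size} bytes, last=$last")
        }
    }
}
```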

View File

@@ -31,9 +31,6 @@ class FileSystemCacheProvider : CacheProvider<FileSystemCacheConfiguration> {
?.let(String::toInt)
?: Deflater.DEFAULT_COMPRESSION
val digestAlgorithm = el.renderAttribute("digest")
val chunkSize = el.renderAttribute("chunk-size")
?.let(Integer::decode)
?: 0x10000
return FileSystemCacheConfiguration(
path,
@@ -41,7 +38,6 @@ class FileSystemCacheProvider : CacheProvider<FileSystemCacheConfiguration> {
digestAlgorithm,
enableCompression,
compressionLevel,
chunkSize
)
}
@@ -63,7 +59,6 @@ class FileSystemCacheProvider : CacheProvider<FileSystemCacheConfiguration> {
}?.let {
attr("compression-level", it.toString())
}
attr("chunk-size", chunkSize.toString())
}
result
}

View File

@@ -6,11 +6,11 @@ import net.woggioni.rbcs.api.CacheValueMetadata
import net.woggioni.rbcs.common.createLogger
import java.time.Duration
import java.time.Instant
import java.util.PriorityQueue
import java.util.concurrent.CompletableFuture
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.PriorityBlockingQueue
import java.util.concurrent.TimeUnit
import java.util.concurrent.atomic.AtomicLong
import java.util.concurrent.locks.ReentrantReadWriteLock
import kotlin.concurrent.withLock
private class CacheKey(private val value: ByteArray) {
override fun equals(other: Any?) = if (other is CacheKey) {
@@ -34,15 +34,17 @@ class InMemoryCache(
private val log = createLogger<InMemoryCache>()
}
private val size = AtomicLong()
private val map = ConcurrentHashMap<CacheKey, CacheEntry>()
private var mapSize : Long = 0
private val map = HashMap<CacheKey, CacheEntry>()
private val lock = ReentrantReadWriteLock()
private val cond = lock.writeLock().newCondition()
private class RemovalQueueElement(val key: CacheKey, val value: CacheEntry, val expiry: Instant) :
Comparable<RemovalQueueElement> {
override fun compareTo(other: RemovalQueueElement) = expiry.compareTo(other.expiry)
}
private val removalQueue = PriorityBlockingQueue<RemovalQueueElement>()
private val removalQueue = PriorityQueue<RemovalQueueElement>()
@Volatile
private var running = true
@@ -51,21 +53,32 @@ class InMemoryCache(
init {
Thread.ofVirtual().name("in-memory-cache-gc").start {
try {
while (running) {
val el = removalQueue.poll(1, TimeUnit.SECONDS) ?: continue
val value = el.value
val now = Instant.now()
if (now > el.expiry) {
val removed = map.remove(el.key, value)
if (removed) {
updateSizeAfterRemoval(value.content)
//Decrease the reference count for map
value.content.release()
lock.writeLock().withLock {
while (running) {
val el = removalQueue.poll()
if(el == null) {
cond.await(1000, TimeUnit.MILLISECONDS)
continue
}
val value = el.value
val now = Instant.now()
if (now > el.expiry) {
val removed = map.remove(el.key, value)
if (removed) {
updateSizeAfterRemoval(value.content)
//Decrease the reference count for map
value.content.release()
}
} else {
removalQueue.offer(el)
val interval = minOf(Duration.between(now, el.expiry), Duration.ofSeconds(1))
cond.await(interval.toMillis(), TimeUnit.MILLISECONDS)
}
} else {
removalQueue.put(el)
Thread.sleep(minOf(Duration.between(now, el.expiry), Duration.ofSeconds(1)))
}
map.forEach {
it.value.content.release()
}
map.clear()
}
complete(null)
} catch (ex: Throwable) {
@@ -77,7 +90,7 @@ class InMemoryCache(
fun removeEldest(): Long {
while (true) {
val el = removalQueue.take()
val el = removalQueue.poll() ?: return mapSize
val value = el.value
val removed = map.remove(el.key, value)
if (removed) {
@@ -90,18 +103,22 @@ class InMemoryCache(
}
private fun updateSizeAfterRemoval(removed: ByteBuf): Long {
return size.updateAndGet { currentSize: Long ->
currentSize - removed.readableBytes()
}
mapSize -= removed.readableBytes()
return mapSize
}
override fun asyncClose() : CompletableFuture<Void> {
running = false
lock.writeLock().withLock {
cond.signal()
}
return closeFuture
}
fun get(key: ByteArray) = map[CacheKey(key)]?.run {
CacheEntry(metadata, content.retainedDuplicate())
fun get(key: ByteArray) = lock.readLock().withLock {
map[CacheKey(key)]?.run {
CacheEntry(metadata, content.retainedDuplicate())
}
}
fun put(
@@ -109,18 +126,18 @@ class InMemoryCache(
value: CacheEntry,
) {
val cacheKey = CacheKey(key)
val oldSize = map.put(cacheKey, value)?.let { old ->
val result = old.content.readableBytes()
old.content.release()
result
} ?: 0
val delta = value.content.readableBytes() - oldSize
var newSize = size.updateAndGet { currentSize: Long ->
currentSize + delta
}
removalQueue.put(RemovalQueueElement(cacheKey, value, Instant.now().plus(maxAge)))
while (newSize > maxSize) {
newSize = removeEldest()
lock.writeLock().withLock {
val oldSize = map.put(cacheKey, value)?.let { old ->
val result = old.content.readableBytes()
old.content.release()
result
} ?: 0
val delta = value.content.readableBytes() - oldSize
mapSize += delta
removalQueue.offer(RemovalQueueElement(cacheKey, value, Instant.now().plus(maxAge)))
while (mapSize > maxSize) {
removeEldest()
}
}
}
}
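
The rewrite above trades the lock-free `ConcurrentHashMap`/`PriorityBlockingQueue`/`AtomicLong` trio for a plain `HashMap` guarded by a `ReentrantReadWriteLock`, with a virtual-thread GC loop that waits on a condition until the next entry is due to expire or until `asyncClose()` signals it. A reduced sketch of that scheme, without the size accounting and `ByteBuf` reference counting of the real cache:

```kotlin
import java.time.Duration
import java.time.Instant
import java.util.PriorityQueue
import java.util.concurrent.TimeUnit
import java.util.concurrent.locks.ReentrantReadWriteLock
import kotlin.concurrent.withLock

// Sketch only: a map with per-entry expiry, a single write lock shared by callers
// and the eviction thread, and a condition used both as the GC timer and as the
// shutdown signal.
class ExpiringMap<K : Any, V : Any>(private val maxAge: Duration) : AutoCloseable {
    private data class Expiry<K, V>(val key: K, val value: V, val deadline: Instant) : Comparable<Expiry<K, V>> {
        override fun compareTo(other: Expiry<K, V>) = deadline.compareTo(other.deadline)
    }

    private val lock = ReentrantReadWriteLock()
    private val cond = lock.writeLock().newCondition()
    private val map = HashMap<K, V>()
    private val removalQueue = PriorityQueue<Expiry<K, V>>()

    @Volatile
    private var running = true

    private val gcThread = Thread.ofVirtual().name("expiring-map-gc").start {
        lock.writeLock().withLock {
            while (running) {
                val head = removalQueue.peek()
                val now = Instant.now()
                when {
                    head == null -> cond.await(1, TimeUnit.SECONDS)
                    now >= head.deadline -> {
                        removalQueue.poll()
                        // Only drop the mapping if it still points at the same value.
                        if (map[head.key] === head.value) map.remove(head.key)
                    }
                    else -> cond.await(Duration.between(now, head.deadline).toMillis(), TimeUnit.MILLISECONDS)
                }
            }
            map.clear()
        }
    }

    fun put(key: K, value: V) = lock.writeLock().withLock {
        map[key] = value
        removalQueue.offer(Expiry(key, value, Instant.now().plus(maxAge)))
        cond.signal() // wake the GC thread so it re-evaluates the next deadline
    }

    fun get(key: K): V? = lock.readLock().withLock { map[key] }

    override fun close() {
        running = false
        lock.writeLock().withLock { cond.signal() }
        gcThread.join()
    }
}
```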

View File

@@ -4,7 +4,6 @@ import io.netty.channel.ChannelFactory
import io.netty.channel.EventLoopGroup
import io.netty.channel.socket.DatagramChannel
import io.netty.channel.socket.SocketChannel
import io.netty.util.concurrent.Future
import net.woggioni.rbcs.api.CacheHandlerFactory
import net.woggioni.rbcs.api.Configuration
import net.woggioni.rbcs.common.RBCS
@@ -16,7 +15,6 @@ data class InMemoryCacheConfiguration(
val digestAlgorithm : String?,
val compressionEnabled: Boolean,
val compressionLevel: Int,
val chunkSize : Int
) : Configuration.Cache {
override fun materialize() = object : CacheHandlerFactory {
private val cache = InMemoryCache(maxAge, maxSize)
@@ -24,6 +22,7 @@ data class InMemoryCacheConfiguration(
override fun asyncClose() = cache.asyncClose()
override fun newHandler(
cfg : Configuration,
eventLoop: EventLoopGroup,
socketChannelFactory: ChannelFactory<SocketChannel>,
datagramChannelFactory: ChannelFactory<DatagramChannel>

View File

@@ -2,15 +2,9 @@ package net.woggioni.rbcs.server.cache
import io.netty.buffer.ByteBuf
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.SimpleChannelInboundHandler
import net.woggioni.rbcs.api.CacheHandler
import net.woggioni.rbcs.api.message.CacheMessage
import net.woggioni.rbcs.api.message.CacheMessage.CacheContent
import net.woggioni.rbcs.api.message.CacheMessage.CacheGetRequest
import net.woggioni.rbcs.api.message.CacheMessage.CachePutRequest
import net.woggioni.rbcs.api.message.CacheMessage.CachePutResponse
import net.woggioni.rbcs.api.message.CacheMessage.CacheValueFoundResponse
import net.woggioni.rbcs.api.message.CacheMessage.CacheValueNotFoundResponse
import net.woggioni.rbcs.api.message.CacheMessage.LastCacheContent
import net.woggioni.rbcs.api.message.CacheMessage.*
import net.woggioni.rbcs.common.ByteBufOutputStream
import net.woggioni.rbcs.common.RBCS.processCacheKey
import java.util.zip.Deflater
@@ -22,9 +16,17 @@ class InMemoryCacheHandler(
private val digestAlgorithm: String?,
private val compressionEnabled: Boolean,
private val compressionLevel: Int
) : SimpleChannelInboundHandler<CacheMessage>() {
) : CacheHandler() {
private interface InProgressPutRequest : AutoCloseable {
private interface InProgressRequest : AutoCloseable {
}
private class InProgressGetRequest(val request : CacheGetRequest) : InProgressRequest {
override fun close() {
}
}
private interface InProgressPutRequest : InProgressRequest {
val request: CachePutRequest
val buf: ByteBuf
@@ -33,18 +35,14 @@ class InMemoryCacheHandler(
private inner class InProgressPlainPutRequest(ctx: ChannelHandlerContext, override val request: CachePutRequest) :
InProgressPutRequest {
override val buf = ctx.alloc().compositeBuffer()
private val stream = ByteBufOutputStream(buf).let {
if (compressionEnabled) {
DeflaterOutputStream(it, Deflater(compressionLevel))
} else {
it
}
}
override val buf = ctx.alloc().compositeHeapBuffer()
override fun append(buf: ByteBuf) {
this.buf.addComponent(true, buf.retain())
if(buf.isDirect) {
this.buf.writeBytes(buf)
} else {
this.buf.addComponent(true, buf.retain())
}
}
override fun close() {
@@ -72,7 +70,7 @@ class InMemoryCacheHandler(
}
}
private var inProgressPutRequest: InProgressPutRequest? = null
private var inProgressRequest: InProgressRequest? = null
override fun channelRead0(ctx: ChannelHandlerContext, msg: CacheMessage) {
when (msg) {
@@ -85,24 +83,11 @@ class InMemoryCacheHandler(
}
private fun handleGetRequest(ctx: ChannelHandlerContext, msg: CacheGetRequest) {
cache.get(processCacheKey(msg.key, digestAlgorithm))?.let { value ->
ctx.writeAndFlush(CacheValueFoundResponse(msg.key, value.metadata))
if (compressionEnabled) {
val buf = ctx.alloc().heapBuffer()
InflaterOutputStream(ByteBufOutputStream(buf)).use {
value.content.readBytes(it, value.content.readableBytes())
value.content.release()
buf.retain()
}
ctx.writeAndFlush(LastCacheContent(buf))
} else {
ctx.writeAndFlush(LastCacheContent(value.content))
}
} ?: ctx.writeAndFlush(CacheValueNotFoundResponse())
inProgressRequest = InProgressGetRequest(msg)
}
private fun handlePutRequest(ctx: ChannelHandlerContext, msg: CachePutRequest) {
inProgressPutRequest = if(compressionEnabled) {
inProgressRequest = if(compressionEnabled) {
InProgressCompressedPutRequest(ctx, msg)
} else {
InProgressPlainPutRequest(ctx, msg)
@@ -110,27 +95,46 @@ class InMemoryCacheHandler(
}
private fun handleCacheContent(ctx: ChannelHandlerContext, msg: CacheContent) {
inProgressPutRequest?.append(msg.content())
val req = inProgressRequest
if(req is InProgressPutRequest) {
req.append(msg.content())
}
}
private fun handleLastCacheContent(ctx: ChannelHandlerContext, msg: LastCacheContent) {
handleCacheContent(ctx, msg)
inProgressPutRequest?.let { inProgressRequest ->
inProgressPutRequest = null
val buf = inProgressRequest.buf
buf.retain()
inProgressRequest.close()
val cacheKey = processCacheKey(inProgressRequest.request.key, digestAlgorithm)
cache.put(cacheKey, CacheEntry(inProgressRequest.request.metadata, buf))
ctx.writeAndFlush(CachePutResponse(inProgressRequest.request.key))
when(val req = inProgressRequest) {
is InProgressGetRequest -> {
cache.get(processCacheKey(req.request.key, digestAlgorithm))?.let { value ->
sendMessageAndFlush(ctx, CacheValueFoundResponse(req.request.key, value.metadata))
if (compressionEnabled) {
val buf = ctx.alloc().heapBuffer()
InflaterOutputStream(ByteBufOutputStream(buf)).use {
value.content.readBytes(it, value.content.readableBytes())
value.content.release()
buf.retain()
}
sendMessage(ctx, LastCacheContent(buf))
} else {
sendMessage(ctx, LastCacheContent(value.content))
}
} ?: sendMessage(ctx, CacheValueNotFoundResponse())
}
is InProgressPutRequest -> {
this.inProgressRequest = null
val buf = req.buf
buf.retain()
req.close()
val cacheKey = processCacheKey(req.request.key, digestAlgorithm)
cache.put(cacheKey, CacheEntry(req.request.metadata, buf))
sendMessageAndFlush(ctx, CachePutResponse(req.request.key))
}
}
}
override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
inProgressPutRequest?.let { req ->
req.buf.release()
inProgressPutRequest = null
}
inProgressRequest?.close()
inProgressRequest = null
super.exceptionCaught(ctx, cause)
}
}
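
In the uncompressed PUT path above the accumulator is now a composite *heap* buffer, and direct buffers are copied into it rather than retained as components, so cached entries do not pin off-heap memory. A small sketch of that decision, with hypothetical function names:

```kotlin
import io.netty.buffer.ByteBuf
import io.netty.buffer.ByteBufAllocator
import io.netty.buffer.CompositeByteBuf

// Sketch only: direct buffers are copied onto the heap, heap buffers are added as
// retained components (zero-copy), so the accumulated value never references
// off-heap memory once it sits in the in-memory cache.
fun appendToHeapAccumulator(accumulator: CompositeByteBuf, chunk: ByteBuf) {
    if (chunk.isDirect) {
        accumulator.writeBytes(chunk)                  // copy off-heap bytes onto the heap
    } else {
        accumulator.addComponent(true, chunk.retain()) // keep the heap buffer as-is
    }
}

fun main() {
    val alloc = ByteBufAllocator.DEFAULT
    val accumulator = alloc.compositeHeapBuffer()
    val direct = alloc.directBuffer().writeBytes("off-heap".toByteArray())
    val heap = alloc.heapBuffer().writeBytes("on-heap".toByteArray())
    appendToHeapAccumulator(accumulator, direct)
    appendToHeapAccumulator(accumulator, heap)
    println("accumulated ${accumulator.readableBytes()} bytes, direct=${accumulator.isDirect}")
    direct.release()
    heap.release()
    accumulator.release()
}
```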

View File

@@ -31,16 +31,12 @@ class InMemoryCacheProvider : CacheProvider<InMemoryCacheConfiguration> {
?.let(String::toInt)
?: Deflater.DEFAULT_COMPRESSION
val digestAlgorithm = el.renderAttribute("digest")
val chunkSize = el.renderAttribute("chunk-size")
?.let(Integer::decode)
?: 0x10000
return InMemoryCacheConfiguration(
maxAge,
maxSize,
digestAlgorithm,
enableCompression,
compressionLevel,
chunkSize
)
}
@@ -60,7 +56,6 @@ class InMemoryCacheProvider : CacheProvider<InMemoryCacheConfiguration> {
}?.let {
attr("compression-level", it.toString())
}
attr("chunk-size", chunkSize.toString())
}
result
}

View File

@@ -27,10 +27,11 @@ object Parser {
val root = document.documentElement
val anonymousUser = User("", null, emptySet(), null)
var connection: Configuration.Connection = Configuration.Connection(
Duration.of(30, ChronoUnit.SECONDS),
Duration.of(60, ChronoUnit.SECONDS),
Duration.of(30, ChronoUnit.SECONDS),
Duration.of(30, ChronoUnit.SECONDS),
67108864
Duration.of(60, ChronoUnit.SECONDS),
0x4000000,
0x10000
)
var eventExecutor: Configuration.EventExecutor = Configuration.EventExecutor(true)
var cache: Cache? = null
@@ -119,11 +120,14 @@ object Parser {
?.let(Duration::parse) ?: Duration.of(60, ChronoUnit.SECONDS)
val maxRequestSize = child.renderAttribute("max-request-size")
?.let(Integer::decode) ?: 0x4000000
val chunkSize = child.renderAttribute("chunk-size")
?.let(Integer::decode) ?: 0x10000
connection = Configuration.Connection(
idleTimeout,
readIdleTimeout,
writeIdleTimeout,
maxRequestSize
maxRequestSize,
chunkSize
)
}

View File

@@ -40,6 +40,7 @@ object Serializer {
attr("read-idle-timeout", connection.readIdleTimeout.toString())
attr("write-idle-timeout", connection.writeIdleTimeout.toString())
attr("max-request-size", connection.maxRequestSize.toString())
attr("chunk-size", connection.chunkSize.toString())
}
}
node("event-executor") {

View File

@@ -27,6 +27,9 @@ import javax.net.ssl.SSLPeerUnverifiedException
@Sharable
object ExceptionHandler : ChannelDuplexHandler() {
val NAME : String = this::class.java.name
private val log = contextLogger()
private val NOT_AUTHORIZED: FullHttpResponse = DefaultFullHttpResponse(

View File

@@ -0,0 +1,13 @@
package net.woggioni.rbcs.server.handler
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.SimpleChannelInboundHandler
import io.netty.handler.codec.http.HttpContent
class BlackHoleRequestHandler : SimpleChannelInboundHandler<HttpContent>() {
companion object {
val NAME = BlackHoleRequestHandler::class.java.name
}
override fun channelRead0(ctx: ChannelHandlerContext, msg: HttpContent) {
}
}

View File

@@ -1,28 +0,0 @@
package net.woggioni.rbcs.server.handler
import io.netty.channel.ChannelHandler.Sharable
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.SimpleChannelInboundHandler
import io.netty.handler.codec.http.HttpContent
import io.netty.handler.codec.http.LastHttpContent
import net.woggioni.rbcs.api.message.CacheMessage.CacheContent
import net.woggioni.rbcs.api.message.CacheMessage.LastCacheContent
@Sharable
object CacheContentHandler : SimpleChannelInboundHandler<HttpContent>() {
val NAME = this::class.java.name
override fun channelRead0(ctx: ChannelHandlerContext, msg: HttpContent) {
when(msg) {
is LastHttpContent -> {
ctx.fireChannelRead(LastCacheContent(msg.content().retain()))
ctx.pipeline().remove(this)
}
else -> ctx.fireChannelRead(CacheContent(msg.content().retain()))
}
}
override fun exceptionCaught(ctx: ChannelHandlerContext?, cause: Throwable?) {
super.exceptionCaught(ctx, cause)
}
}

View File

@@ -1,12 +1,14 @@
package net.woggioni.rbcs.server.handler
import io.netty.channel.ChannelDuplexHandler
import io.netty.channel.ChannelHandler
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.ChannelPromise
import io.netty.handler.codec.http.DefaultFullHttpResponse
import io.netty.handler.codec.http.DefaultHttpContent
import io.netty.handler.codec.http.DefaultHttpResponse
import io.netty.handler.codec.http.DefaultLastHttpContent
import io.netty.handler.codec.http.HttpContent
import io.netty.handler.codec.http.HttpHeaderNames
import io.netty.handler.codec.http.HttpHeaderValues
import io.netty.handler.codec.http.HttpHeaders
@@ -15,6 +17,7 @@ import io.netty.handler.codec.http.HttpRequest
import io.netty.handler.codec.http.HttpResponseStatus
import io.netty.handler.codec.http.HttpUtil
import io.netty.handler.codec.http.HttpVersion
import io.netty.handler.codec.http.LastHttpContent
import net.woggioni.rbcs.api.CacheValueMetadata
import net.woggioni.rbcs.api.message.CacheMessage
import net.woggioni.rbcs.api.message.CacheMessage.CacheContent
@@ -27,19 +30,29 @@ import net.woggioni.rbcs.api.message.CacheMessage.LastCacheContent
import net.woggioni.rbcs.common.createLogger
import net.woggioni.rbcs.common.debug
import net.woggioni.rbcs.common.warn
import net.woggioni.rbcs.server.exception.ExceptionHandler
import java.nio.file.Path
import java.util.Locale
class ServerHandler(private val serverPrefix: Path) :
class ServerHandler(private val serverPrefix: Path, private val cacheHandlerSupplier : () -> ChannelHandler) :
ChannelDuplexHandler() {
companion object {
private val log = createLogger<ServerHandler>()
val NAME = this::class.java.name
val NAME = ServerHandler::class.java.name
}
private var httpVersion = HttpVersion.HTTP_1_1
private var keepAlive = true
private var pipelinedRequests = 0
private fun newRequest() {
pipelinedRequests += 1
}
private fun requestCompleted(ctx : ChannelHandlerContext) {
pipelinedRequests -= 1
if(pipelinedRequests == 0) ctx.read()
}
private fun resetRequestMetadata() {
httpVersion = HttpVersion.HTTP_1_1
@@ -59,14 +72,38 @@ class ServerHandler(private val serverPrefix: Path) :
}
}
private var cacheRequestInProgress : Boolean = false
override fun handlerAdded(ctx: ChannelHandlerContext) {
ctx.read()
}
override fun channelRead(ctx: ChannelHandlerContext, msg: Any) {
when (msg) {
is HttpRequest -> handleRequest(ctx, msg)
is HttpContent -> {
if(cacheRequestInProgress) {
if(msg is LastHttpContent) {
super.channelRead(ctx, LastCacheContent(msg.content().retain()))
cacheRequestInProgress = false
} else {
super.channelRead(ctx, CacheContent(msg.content().retain()))
}
msg.release()
} else {
super.channelRead(ctx, msg)
}
}
else -> super.channelRead(ctx, msg)
}
}
override fun channelReadComplete(ctx: ChannelHandlerContext) {
super.channelReadComplete(ctx)
if(cacheRequestInProgress) {
ctx.read()
}
}
override fun write(ctx: ChannelHandlerContext, msg: Any, promise: ChannelPromise?) {
if (msg is CacheMessage) {
@@ -84,14 +121,18 @@ class ServerHandler(private val serverPrefix: Path) :
val buf = ctx.alloc().buffer(keyBytes.size).apply {
writeBytes(keyBytes)
}
ctx.writeAndFlush(DefaultLastHttpContent(buf))
ctx.writeAndFlush(DefaultLastHttpContent(buf)).also {
requestCompleted(ctx)
}
}
is CacheValueNotFoundResponse -> {
val response = DefaultFullHttpResponse(httpVersion, HttpResponseStatus.NOT_FOUND)
response.headers()[HttpHeaderNames.CONTENT_LENGTH] = 0
setKeepAliveHeader(response.headers())
ctx.writeAndFlush(response)
ctx.writeAndFlush(response).also {
requestCompleted(ctx)
}
}
is CacheValueFoundResponse -> {
@@ -108,7 +149,9 @@ class ServerHandler(private val serverPrefix: Path) :
}
is LastCacheContent -> {
ctx.writeAndFlush(DefaultLastHttpContent(msg.content()))
ctx.writeAndFlush(DefaultLastHttpContent(msg.content())).also {
requestCompleted(ctx)
}
}
is CacheContent -> {
@@ -127,6 +170,9 @@ class ServerHandler(private val serverPrefix: Path) :
} finally {
resetRequestMetadata()
}
} else if(msg is LastHttpContent) {
ctx.write(msg, promise)
requestCompleted(ctx)
} else super.write(ctx, msg, promise)
}
@@ -137,9 +183,12 @@ class ServerHandler(private val serverPrefix: Path) :
if (method === HttpMethod.GET) {
val path = Path.of(msg.uri()).normalize()
if (path.startsWith(serverPrefix)) {
cacheRequestInProgress = true
val relativePath = serverPrefix.relativize(path)
val key = relativePath.toString()
ctx.pipeline().addAfter(NAME, CacheContentHandler.NAME, CacheContentHandler)
val key : String = relativePath.toString()
newRequest()
val cacheHandler = cacheHandlerSupplier()
ctx.pipeline().addBefore(ExceptionHandler.NAME, null, cacheHandler)
key.let(::CacheGetRequest)
.let(ctx::fireChannelRead)
?: ctx.channel().write(CacheValueNotFoundResponse())
@@ -154,12 +203,16 @@ class ServerHandler(private val serverPrefix: Path) :
} else if (method === HttpMethod.PUT) {
val path = Path.of(msg.uri()).normalize()
if (path.startsWith(serverPrefix)) {
cacheRequestInProgress = true
val relativePath = serverPrefix.relativize(path)
val key = relativePath.toString()
log.debug(ctx) {
"Added value for key '$key' to build cache"
}
ctx.pipeline().addAfter(NAME, CacheContentHandler.NAME, CacheContentHandler)
newRequest()
val cacheHandler = cacheHandlerSupplier()
ctx.pipeline().addBefore(ExceptionHandler.NAME, null, cacheHandler)
path.fileName?.toString()
?.let {
val mimeType = HttpUtil.getMimeType(msg)?.toString()
@@ -176,6 +229,8 @@ class ServerHandler(private val serverPrefix: Path) :
ctx.writeAndFlush(response)
}
} else if (method == HttpMethod.TRACE) {
newRequest()
ctx.pipeline().addBefore(ExceptionHandler.NAME, null, TraceHandler)
super.channelRead(ctx, msg)
} else {
log.warn(ctx) {
@@ -187,42 +242,6 @@ class ServerHandler(private val serverPrefix: Path) :
}
}
data class ContentDisposition(val type: Type?, val fileName: String?) {
enum class Type {
attachment, `inline`;
companion object {
@JvmStatic
fun parse(maybeString: String?) = maybeString.let { s ->
try {
java.lang.Enum.valueOf(Type::class.java, s)
} catch (ex: IllegalArgumentException) {
null
}
}
}
}
companion object {
@JvmStatic
fun parse(contentDisposition: String) : ContentDisposition {
val parts = contentDisposition.split(";").dropLastWhile { it.isEmpty() }.toTypedArray()
val dispositionType = parts[0].trim { it <= ' ' }.let(Type::parse) // Get the type (e.g., attachment)
var filename: String? = null
for (i in 1..<parts.size) {
val part = parts[i].trim { it <= ' ' }
if (part.lowercase(Locale.getDefault()).startsWith("filename=")) {
filename = part.substring("filename=".length).trim { it <= ' ' }.replace("\"", "")
break
}
}
return ContentDisposition(dispositionType, filename)
}
}
}
override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
super.exceptionCaught(ctx, cause)
}
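
The `pipelinedRequests` counter added above is what couples HTTP pipelining to the manual-read mode configured in the channel initializer: each recognised request increments it, each completed response decrements it, and only when it drops back to zero does the handler ask the transport for more data. A toy model of that bookkeeping, independent of Netty, where `read` stands in for `ctx.read()`:

```kotlin
// Sketch only: the next read is requested only once every previously accepted
// request has been answered, keeping buffering bounded under pipelining.
class PipelineGate(private val read: () -> Unit) {
    private var inFlight = 0

    fun onRequestAccepted() {
        inFlight += 1
    }

    fun onResponseCompleted() {
        inFlight -= 1
        if (inFlight == 0) read()
    }
}

fun main() {
    var reads = 0
    val gate = PipelineGate { reads += 1; println("read() #$reads requested") }
    // Two pipelined requests arrive in the same read batch...
    gate.onRequestAccepted()
    gate.onRequestAccepted()
    // ...and the next read is only triggered once both responses are out.
    gate.onResponseCompleted()
    gate.onResponseCompleted()
}
```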

View File

@@ -42,6 +42,7 @@ object TraceHandler : ChannelInboundHandlerAdapter() {
}
is LastHttpContent -> {
ctx.writeAndFlush(msg)
ctx.pipeline().remove(this)
}
is HttpContent -> ctx.writeAndFlush(msg)
else -> super.channelRead(ctx, msg)

View File

@@ -94,6 +94,9 @@ class ThrottlingHandler(private val bucketManager : BucketManager,
handleBuckets(buckets, ctx, msg, false)
}, waitDuration.toMillis(), TimeUnit.MILLISECONDS)
} else {
queuedContent?.let { qc ->
qc.forEach { it.release() }
}
this.queuedContent = null
sendThrottledResponse(ctx, waitDuration)
}

View File

@@ -115,6 +115,14 @@
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="chunk-size" type="rbcs:byteSizeType" default="0x10000">
<xs:annotation>
<xs:documentation>
Maximum byte size of socket write calls
(reduce it to reduce memory consumption, increase it for increased throughput)
</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:complexType>
<xs:complexType name="eventExecutorType">
@@ -175,13 +183,6 @@
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="chunk-size" type="rbcs:byteSizeType" default="0x10000">
<xs:annotation>
<xs:documentation>
Maximum byte size of socket write calls
</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:extension>
</xs:complexContent>
</xs:complexType>
@@ -231,14 +232,6 @@
</xs:documentation>
</xs:annotation>
</xs:attribute>
<xs:attribute name="chunk-size" type="rbcs:byteSizeType" default="0x10000">
<xs:annotation>
<xs:documentation>
Maximum byte size of a cache value that will be stored in memory
(reduce it to reduce memory consumption, increase it for increased throughput)
</xs:documentation>
</xs:annotation>
</xs:attribute>
</xs:extension>
</xs:complexContent>
</xs:complexType>

View File

@@ -1,5 +1,14 @@
package net.woggioni.rbcs.server.test.utils;
import java.math.BigInteger;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.Date;
import org.bouncycastle.asn1.DERSequence;
import org.bouncycastle.asn1.x500.X500Name;
import org.bouncycastle.asn1.x509.BasicConstraints;
@@ -15,16 +24,6 @@ import org.bouncycastle.cert.jcajce.JcaX509v3CertificateBuilder;
import org.bouncycastle.operator.ContentSigner;
import org.bouncycastle.operator.jcajce.JcaContentSignerBuilder;
import java.math.BigInteger;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PrivateKey;
import java.security.SecureRandom;
import java.security.cert.X509Certificate;
import java.time.Instant;
import java.time.temporal.ChronoUnit;
import java.util.Date;
public class CertificateUtils {
public record X509Credentials(

View File

@@ -41,7 +41,8 @@ abstract class AbstractBasicAuthServerTest : AbstractServerTest() {
Duration.of(60, ChronoUnit.SECONDS),
Duration.of(30, ChronoUnit.SECONDS),
Duration.of(30, ChronoUnit.SECONDS),
0x1000
0x1000,
0x10000
),
users.asSequence().map { it.name to it}.toMap(),
sequenceOf(writersGroup, readersGroup).map { it.name to it}.toMap(),
@@ -50,8 +51,7 @@ abstract class AbstractBasicAuthServerTest : AbstractServerTest() {
maxAge = Duration.ofSeconds(3600 * 24),
digestAlgorithm = "MD5",
compressionLevel = Deflater.DEFAULT_COMPRESSION,
compressionEnabled = false,
chunkSize = 0x1000
compressionEnabled = false
),
Configuration.BasicAuthentication(),
null,

View File

@@ -147,7 +147,8 @@ abstract class AbstractTlsServerTest : AbstractServerTest() {
Duration.of(60, ChronoUnit.SECONDS),
Duration.of(30, ChronoUnit.SECONDS),
Duration.of(30, ChronoUnit.SECONDS),
0x1000
0x1000,
0x10000
),
users.asSequence().map { it.name to it }.toMap(),
sequenceOf(writersGroup, readersGroup).map { it.name to it }.toMap(),
@@ -156,7 +157,6 @@ abstract class AbstractTlsServerTest : AbstractServerTest() {
compressionEnabled = false,
compressionLevel = Deflater.DEFAULT_COMPRESSION,
digestAlgorithm = "MD5",
chunkSize = 0x1000
),
// InMemoryCacheConfiguration(
// maxAge = Duration.ofSeconds(3600 * 24),

View File

@@ -154,7 +154,7 @@ class BasicAuthServerTest : AbstractBasicAuthServerTest() {
}
@Test
@Order(6)
@Order(8)
fun getAsAThrottledUser() {
val client: HttpClient = HttpClient.newHttpClient()
@@ -172,7 +172,7 @@ class BasicAuthServerTest : AbstractBasicAuthServerTest() {
}
@Test
@Order(7)
@Order(9)
fun getAsAThrottledUser2() {
val client: HttpClient = HttpClient.newHttpClient()

View File

@@ -41,7 +41,8 @@ class NoAuthServerTest : AbstractServerTest() {
Duration.of(60, ChronoUnit.SECONDS),
Duration.of(30, ChronoUnit.SECONDS),
Duration.of(30, ChronoUnit.SECONDS),
0x1000
0x1000,
0x10000
),
emptyMap(),
emptyMap(),
@@ -51,7 +52,6 @@ class NoAuthServerTest : AbstractServerTest() {
digestAlgorithm = "MD5",
compressionLevel = Deflater.DEFAULT_COMPRESSION,
maxSize = 0x1000000,
chunkSize = 0x1000
),
null,
null,

View File

@@ -166,4 +166,17 @@ class TlsServerTest : AbstractTlsServerTest() {
Assertions.assertEquals(HttpResponseStatus.OK.code(), response.statusCode())
println(String(response.body()))
}
@Test
@Order(10)
fun putAsUnknownUserUser() {
val (key, value) = keyValuePair
val client: HttpClient = getHttpClient(getClientKeyStore(ca, X500Name("CN=Unknown user")))
val requestBuilder = newRequestBuilder(key)
.header("Content-Type", "application/octet-stream")
.PUT(HttpRequest.BodyPublishers.ofByteArray(value))
val response: HttpResponse<String> = client.send(requestBuilder.build(), HttpResponse.BodyHandlers.ofString())
Assertions.assertEquals(HttpResponseStatus.INTERNAL_SERVER_ERROR.code(), response.statusCode())
}
}

View File

@@ -7,9 +7,10 @@
read-idle-timeout="PT10M"
write-idle-timeout="PT11M"
idle-timeout="PT30M"
max-request-size="101325"/>
max-request-size="101325"
chunk-size="0xa910"/>
<event-executor use-virtual-threads="false"/>
<cache xs:type="rbcs:fileSystemCacheType" path="/tmp/rbcs" max-age="P7D" chunk-size="0xa910"/>
<cache xs:type="rbcs:fileSystemCacheType" path="/tmp/rbcs" max-age="P7D"/>
<authentication>
<none/>
</authentication>

View File

@@ -9,9 +9,10 @@
max-request-size="67108864"
idle-timeout="PT30S"
read-idle-timeout="PT60S"
write-idle-timeout="PT60S"/>
write-idle-timeout="PT60S"
chunk-size="123"/>
<event-executor use-virtual-threads="true"/>
<cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" chunk-size="123">
<cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D">
<server host="memcached" port="11211"/>
</cache>
<authorization>

View File

@@ -8,9 +8,10 @@
read-idle-timeout="PT10M"
write-idle-timeout="PT11M"
idle-timeout="PT30M"
max-request-size="101325"/>
max-request-size="101325"
chunk-size="456"/>
<event-executor use-virtual-threads="false"/>
<cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" digest="SHA-256" chunk-size="456" compression-mode="deflate" compression-level="7">
<cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" digest="SHA-256" compression-mode="deflate" compression-level="7">
<server host="127.0.0.1" port="11211" max-connections="10" connection-timeout="PT20S"/>
</cache>
<authentication>

View File

@@ -7,9 +7,10 @@
read-idle-timeout="PT10M"
write-idle-timeout="PT11M"
idle-timeout="PT30M"
max-request-size="4096"/>
max-request-size="4096"
chunk-size="0xa91f"/>
<event-executor use-virtual-threads="false"/>
<cache xs:type="rbcs:inMemoryCacheType" max-age="P7D" chunk-size="0xa91f"/>
<cache xs:type="rbcs:inMemoryCacheType" max-age="P7D"/>
<authorization>
<users>
<user name="user1" password="password1">