Compare commits

18 commits:

222b475223, ede515e2ca, 974fdb7a91, a294229ff0, 9600dd7e4f, 729276a2b1,
7ba7070693, 59a12d6218, fc298de548, 8b639fc0b3, 5545f618f9, 43c0938d9a,
17215b401a, 4aced1c717, 31ce34cddb, d64f7f4f27, d15235fc4c, 49bb4f41b8

README.md (150 changed lines)

@@ -1,4 +1,45 @@
 # Remote Build Cache Server
+
+[badge images]
+
+<!--
+[image]
+-->
+
+Speed up your builds by sharing and reusing unchanged build outputs across your team.
+
+Remote Build Cache Server (RBCS) allows teams to share and reuse unchanged build and test outputs,
+significantly reducing build times for both local and CI environments. By eliminating redundant work,
+RBCS helps teams become more productive and efficient.
+
+**Key Features:**
+- Support for both Gradle and Maven build environments
+- Pluggable storage backends (in-memory, disk-backed, memcached)
+- Flexible authentication (HTTP basic or TLS certificate)
+- Role-based access control
+- Request throttling
+
+## Table of Contents
+- [Quickstart](#quickstart)
+- [Integration with build tools](#integration-with-build-tools)
+- [Use RBCS with Gradle](#use-rbcs-with-gradle)
+- [Use RBCS with Maven](#use-rbcs-with-maven)
+- [Server configuration](#server-configuration)
+- [Authentication](#authentication)
+- [HTTP Basic authentication](#configure-http-basic-authentication)
+- [TLS client certificate authentication](#configure-tls-certificate-authentication)
+- [Authentication & Access Control](#access-control)
+- [Plugins](#plugins)
+- [Client Tools](#rbcs-client)
+- [Logging](#logging)
+- [Performance](#performance)
+- [FAQ](#faq)
+
 Remote Build Cache Server (shortened to RBCS) allows you to share and reuse unchanged build
 and test outputs across the team. This speeds up local and CI builds since cycles are not wasted
 re-building components that are unaffected by new code changes. RBCS supports both Gradle and

@@ -12,7 +53,7 @@ and throttling.
 
 ## Quickstart
 
-### Downloading the jar file
+### Use the all-in-one jar file
 You can download the latest version from [this link](https://gitea.woggioni.net/woggioni/-/packages/maven/net.woggioni:rbcs-cli/)
 
 
@@ -25,7 +66,7 @@ java -jar rbcs-cli.jar server
 By default it will start an HTTP server bound to localhost and listening on port 8080 with no authentication,
 writing data to the disk, that you can use for testing
 
-### Using the Docker image
+### Use the Docker image
 You can pull the latest Docker image with
 ```bash
 docker pull gitea.woggioni.net/woggioni/rbcs:latest
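
The compare view only shows the `docker pull` line; a minimal sketch of actually starting the container. The port mapping, the mounted configuration path and the assumption that the image entrypoint accepts the same `server` subcommand and `-c` option as the CLI are inferred from the benchmark manifests added below, not stated in this hunk:

```bash
docker pull gitea.woggioni.net/woggioni/rbcs:latest
# The default configuration binds to localhost inside the container, so mount a
# configuration that binds to 0.0.0.0 (as the benchmark manifests do) for -p to be useful.
docker run --rm -p 8080:8080 \
    -v "$PWD/rbcs-server.xml:/rbcs/rbcs-server.xml" \
    gitea.woggioni.net/woggioni/rbcs:latest server -c rbcs-server.xml
```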
@@ -34,41 +75,20 @@ docker pull gitea.woggioni.net/woggioni/rbcs:latest
 By default it will start an HTTP server bound to localhost and listening on port 8080 with no authentication,
 writing data to the disk, that you can use for testing
 
-### Using the native executable
+### Use the native executable
 If you are on a Linux X86_64 machine you can download the native executable
 from [here](https://gitea.woggioni.net/woggioni/-/packages/maven/net.woggioni:rbcs-cli/).
 It behaves the same as the jar file but it doesn't require a JVM and it has faster startup times.
-becausue of GraalVm's [closed-world assumption](https://www.graalvm.org/latest/reference-manual/native-image/basics/#static-analysis),
+because of GraalVM's [closed-world assumption](https://www.graalvm.org/latest/reference-manual/native-image/basics/#static-analysis),
 the native executable does not supports plugins, so it comes with all plugins embedded into it.
 
-## Usage
+> [!WARNING]
+> The native executable is built with `-march=skylake`, so it may fail with SIGILL on x86 CPUs that do not support
+> the full skylake instruction set (as a rule of thumb, older than 2015)
+
-### Configuration
-The location of the `rbcs-server.xml` configuration file depends on the operating system,
-Alternatively it can be changed setting the `RBCS_CONFIGURATION_DIR` environmental variable or `net.woggioni.rbcs.conf.dir` Java system property
-to the directory that contain the `rbcs-server.xml` file.
-
-The server configuration file follows the XML format and uses XML schema for validation
-(you can find the schema for the main configuration file [here](https://gitea.woggioni.net/woggioni/rbcs/src/branch/master/rbcs-server/src/main/resources/net/woggioni/rbcs/server/schema/rbcs-server.xsd)).
-
-The configuration values are enclosed inside XML attribute and support system property / environmental variable interpolation.
-As an example, you can configure RBCS to read the server port number from the `RBCS_SERVER_PORT` environmental variable
-and the bind address from the `rbc.bind.address` JVM system property with
-
-```xml
-<bind host="${sys:rpc.bind.address}" port="${env:RBCS_SERVER_PORT}"/>
-```
-
-Full documentation for all tags and attributes is available [here](doc/server_configuration.md).
-
-### Plugins
-If you want to use memcache as a storage backend you'll also need to download [the memcache plugin](https://gitea.woggioni.net/woggioni/-/packages/maven/net.woggioni:rbcs-server-memcache/)
-
-Plugins need to be stored in a folder named `plugins` in the located server's working directory
-(the directory where the server process is started). They are shipped as TAR archives, so you need to extract
-the content of the archive into the `plugins` directory for the server to pick them up.
-
-### Using RBCS with Gradle
+## Integration with build tools
+### Use RBCS with Gradle
 
 Add this to the `settings.gradle` file of your project
 

@@ -113,7 +133,7 @@ add `org.gradle.caching=true` to your `<project>/gradle.properties` or run gradl
 
 Read [Gradle documentation](https://docs.gradle.org/current/userguide/build_cache.html) for more detailed information.
 
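The README's own Gradle snippet is collapsed in this hunk; per the hunk header above ("add `org.gradle.caching=true` to your `<project>/gradle.properties` or run gradle with…"), the standard ways to switch the cache on are, as a sketch:

```bash
# Enable the build cache permanently for this project...
echo 'org.gradle.caching=true' >> gradle.properties
# ...or enable it for a single invocation only
./gradlew --build-cache build
```
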
-### Using RBCS with Maven
+### Use RBCS with Maven
 
 1. Create an `extensions.xml` in `<project>/.mvn/extensions.xml` with the following content
 ```xml

@@ -143,6 +163,46 @@ Alternatively you can set those properties in your `<project>/pom.xml`
 Read [here](https://maven.apache.org/extensions/maven-build-cache-extension/remote-cache.html)
 for more informations
 
 
+## Server configuration
+RBCS reads an XML configuration file, by default named `rbcs-server.xml`.
+The expected location of the `rbcs-server.xml` file depends on the operating system,
+if the configuration file is not found a default one will be created and its location is printed
+on the console
+
+```bash
+user@76a90cbcd75d:~$ rbcs-cli server
+2025-01-01 00:00:00,000 [INFO ] (main) n.w.r.c.impl.commands.ServerCommand -- Creating default configuration file at '/home/user/.config/rbcs/rbcs-server.xml'
+```
+
+Alternatively it can be changed setting the `RBCS_CONFIGURATION_DIR` environmental variable or `net.woggioni.rbcs.conf.dir`
+Java system property to the directory that contain the `rbcs-server.xml` file.
+It can also be directly specified from the command line with
+```bash
+java -jar rbcs-cli.jar server -c /path/to/rbcs-server.xml
+```
+
+The server configuration file follows the XML format and uses XML schema for validation
+(you can find the schema for the `rbcs-server.xml` configuration file [here](https://gitea.woggioni.net/woggioni/rbcs/src/branch/master/rbcs-server/src/main/resources/net/woggioni/rbcs/server/schema/rbcs-server.xsd)).
+
+The configuration values are enclosed inside XML attribute and support system property / environmental variable interpolation.
+As an example, you can configure RBCS to read the server port number from the `RBCS_SERVER_PORT` environmental variable
+and the bind address from the `rbc.bind.address` JVM system property with
+
+```xml
+<bind host="${sys:rpc.bind.address}" port="${env:RBCS_SERVER_PORT}"/>
+```
+
+Full documentation for all tags and attributes and configuration file examples
+are available [here](doc/server_configuration.md).
+
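A sketch of launching the server so that the interpolation example above resolves; note the added prose mentions `rbc.bind.address` while the XML snippet uses `rpc.bind.address`, and the sketch follows the XML attribute:

```bash
# Port comes from an environment variable, bind address from a JVM system property
export RBCS_SERVER_PORT=8080
java -Drpc.bind.address=127.0.0.1 -jar rbcs-cli.jar server -c /path/to/rbcs-server.xml
```
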
+### Plugins
+If you want to use memcache as a storage backend you'll also need to download [the memcache plugin](https://gitea.woggioni.net/woggioni/-/packages/maven/net.woggioni:rbcs-server-memcache/)
+
+Plugins need to be stored in a folder named `plugins` in the located server's working directory
+(the directory where the server process is started). They are shipped as TAR archives, so you need to extract
+the content of the archive into the `plugins` directory for the server to pick them up.
+
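A sketch of installing the memcache plugin next to the server; the exact archive file name is not given in the diff, so the one below is a placeholder:

```bash
# Create the plugins directory next to where the server will be started
mkdir -p plugins
# Extract the downloaded plugin TAR archive into it, then start the server from here
tar -xf rbcs-server-memcache.tar -C plugins
java -jar rbcs-cli.jar server
```
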
 ## Authentication
 
 RBCS supports 2 authentication mechanisms:

@@ -250,7 +310,11 @@ as a health check (mind you need to have `Healthcheck` role in order to perform
 
 RBCS ships with a command line client that can be used for testing, benchmarking or to manually
 upload/download files to the cache. It must be configured with the `rbcs-client.xml`,
-whose location follows the same logic of the `rbcs-server.xml`
+whose location follows the same logic of the `rbcs-server.xml`.
+The `rbcs-client.xml` must adhere to the [rbcs-client.xsd](rbcs-client/src/main/resources/net/woggioni/rbcs/client/schema/rbcs-client.xsd)
+XML schema
+
+The documentation for the `rbcs-client.xml` configuration file is available [here](conf/client_configuration.md)
 
 ### GET command
 

@@ -263,6 +327,24 @@ java -jar rbcs-cli.jar client -p $CLIENT_PROFILE_NAME get -k $CACHE_KEY -v $FILE
 ```bash
 java -jar rbcs-cli.jar client -p $CLIENT_PROFILE_NAME put -k $CACHE_KEY -v $FILE_TO_BE_UPLOADED
 ```
 
+If you don't specify the key, a UUID key based on the file content will be used,
+if you add the `-i` command line parameter, the uploaded file will be served with
+`Content-Disposition: inline` HTTP header so that browser will attempt to render
+it in the page instead of triggering a file download (in this way you can create a temporary web page).
+
+The client will try to detect the file mime type upon upload but if you want to be sure you can specify
+it manually with the `-t` parameter.
+
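Putting the new options together, a hypothetical upload that relies on the content-derived key, inline rendering and an explicit MIME type; the profile and file names are placeholders, not taken from the diff:

```bash
# No -k: the key is derived from the file content; -i serves it inline; -t pins the MIME type
java -jar rbcs-cli.jar client -p $CLIENT_PROFILE_NAME put -v page.html -i -t text/html
```
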
+### Benchmark command
+
+```bash
+java -jar rbcs-cli.jar client -p $CLIENT_PROFILE_NAME benchamrk -s 4096 -e 10000
+```
+This will insert 10000 randomly generates entries of 4096 bytes into RBCS, then retrieve them
+and check that the retrieved value matches what was inserted.
+It will also print throughput stats on the way.
+
 ## Logging
 
 RBCS uses [logback](https://logback.qos.ch/) and ships with a [default logging configuration](./conf/logback.xml) that

@@ -270,6 +352,10 @@ can be overridden with `-Dlogback.configurationFile=path/to/custom/configuration
 [Logback documentation](https://logback.qos.ch/manual/configuration.html) for more details about
 how to configure Logback
 
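As a concrete sketch of the override mentioned in the hunk header above:

```bash
# Run the server with a custom Logback configuration instead of the bundled one
java -Dlogback.configurationFile=path/to/custom/configuration.xml -jar rbcs-cli.jar server
```
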
+## Performance
+
+You can check performance benchmarks [here](doc/benchmarks.md)
+
 ## FAQ
 ### Why should I use a build cache?
 

benchmark/rbcs-filesystem.yml (new file, 93 lines):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rbcs-server
data:
  rbcs-server.xml: |
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <rbcs:server xmlns:xs="http://www.w3.org/2001/XMLSchema-instance"
                 xmlns:rbcs="urn:net.woggioni.rbcs.server"
                 xmlns:rbcs-memcache="urn:net.woggioni.rbcs.server.memcache"
                 xs:schemaLocation="urn:net.woggioni.rbcs.server.memcache jpms://net.woggioni.rbcs.server.memcache/net/woggioni/rbcs/server/memcache/schema/rbcs-memcache.xsd urn:net.woggioni.rbcs.server jpms://net.woggioni.rbcs.server/net/woggioni/rbcs/server/schema/rbcs-server.xsd"
    >
        <bind host="0.0.0.0" port="8080" incoming-connections-backlog-size="128"/>
        <connection
            max-request-size="0xd000000"
            idle-timeout="PT15S"
            read-idle-timeout="PT30S"
            write-idle-timeout="PT30S"/>
        <event-executor use-virtual-threads="true"/>
        <cache xs:type="rbcs:fileSystemCacheType" max-age="P7D" enable-compression="false" path="/rbcs/cache"/>
    </rbcs:server>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbcs-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 16Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbcs-deployment
  labels:
    app: rbcs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbcs
  template:
    metadata:
      labels:
        app: rbcs
    spec:
      containers:
        - name: rbcs
          image: gitea.woggioni.net/woggioni/rbcs:native
          imagePullPolicy: Always
          args: ['server', '-c', 'rbcs-server.xml']
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: config-volume
              mountPath: /rbcs/rbcs-server.xml
              subPath: rbcs-server.xml
            - name: cache-volume
              mountPath: /rbcs/cache
          resources:
            requests:
              memory: "0.25Gi"
              cpu: "1"
            limits:
              memory: "0.25Gi"
              cpu: "3.5"
      volumes:
        - name: config-volume
          configMap:
            name: rbcs-server
        - name: cache-volume
          persistentVolumeClaim:
            claimName: rbcs-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: rbcs-service
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: rbcs
```

benchmark/rbcs-in-memory.yml (new file, 76 lines):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rbcs-server
data:
  rbcs-server.xml: |
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <rbcs:server xmlns:xs="http://www.w3.org/2001/XMLSchema-instance"
                 xmlns:rbcs="urn:net.woggioni.rbcs.server"
                 xmlns:rbcs-memcache="urn:net.woggioni.rbcs.server.memcache"
                 xs:schemaLocation="urn:net.woggioni.rbcs.server.memcache jpms://net.woggioni.rbcs.server.memcache/net/woggioni/rbcs/server/memcache/schema/rbcs-memcache.xsd urn:net.woggioni.rbcs.server jpms://net.woggioni.rbcs.server/net/woggioni/rbcs/server/schema/rbcs-server.xsd"
    >
        <bind host="0.0.0.0" port="8080" incoming-connections-backlog-size="128"/>
        <connection
            max-request-size="0xd000000"
            idle-timeout="PT15S"
            read-idle-timeout="PT30S"
            write-idle-timeout="PT30S"/>
        <event-executor use-virtual-threads="true"/>
        <cache xs:type="rbcs:inMemoryCacheType" max-age="P7D" enable-compression="false" max-size="0xb0000000" />
    </rbcs:server>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbcs-deployment
  labels:
    app: rbcs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbcs
  template:
    metadata:
      labels:
        app: rbcs
    spec:
      containers:
        - name: rbcs
          image: gitea.woggioni.net/woggioni/rbcs:native
          imagePullPolicy: Always
          args: ['server', '-c', 'rbcs-server.xml']
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: config-volume
              mountPath: /rbcs/rbcs-server.xml
              subPath: rbcs-server.xml
          resources:
            requests:
              memory: "0.5Gi"
              cpu: "1"
            limits:
              memory: "4Gi"
              cpu: "3.5"
      volumes:
        - name: config-volume
          configMap:
            name: rbcs-server
---
apiVersion: v1
kind: Service
metadata:
  name: rbcs-service
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: rbcs
```

benchmark/rbcs-memcache.yml (new file, 117 lines):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: rbcs-server
data:
  rbcs-server.xml: |
    <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <rbcs:server xmlns:xs="http://www.w3.org/2001/XMLSchema-instance"
                 xmlns:rbcs="urn:net.woggioni.rbcs.server"
                 xmlns:rbcs-memcache="urn:net.woggioni.rbcs.server.memcache"
                 xs:schemaLocation="urn:net.woggioni.rbcs.server.memcache jpms://net.woggioni.rbcs.server.memcache/net/woggioni/rbcs/server/memcache/schema/rbcs-memcache.xsd urn:net.woggioni.rbcs.server jpms://net.woggioni.rbcs.server/net/woggioni/rbcs/server/schema/rbcs-server.xsd"
    >
        <bind host="0.0.0.0" port="8080" incoming-connections-backlog-size="128"/>
        <connection
            max-request-size="0xd000000"
            idle-timeout="PT15S"
            read-idle-timeout="PT30S"
            write-idle-timeout="PT30S"/>
        <event-executor use-virtual-threads="true"/>
        <!--cache xs:type="rbcs:inMemoryCacheType" max-age="P7D" enable-compression="false" max-size="0x10000000" /-->
        <cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" chunk-size="0x1000" digest="MD5">
            <server host="memcached-service" port="11211" max-connections="256"/>
        </cache>
    </rbcs:server>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbcs-deployment
  labels:
    app: rbcs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbcs
  template:
    metadata:
      labels:
        app: rbcs
    spec:
      containers:
        - name: rbcs
          image: gitea.woggioni.net/woggioni/rbcs:native
          imagePullPolicy: Always
          args: ['server', '-c', 'rbcs-server.xml']
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: config-volume
              mountPath: /rbcs/rbcs-server.xml
              subPath: rbcs-server.xml
          resources:
            requests:
              memory: "0.25Gi"
              cpu: "1"
            limits:
              memory: "0.25Gi"
              cpu: "1"
      volumes:
        - name: config-volume
          configMap:
            name: rbcs-server
---
apiVersion: v1
kind: Service
metadata:
  name: rbcs-service
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: rbcs
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: memcached-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: memcached
  template:
    metadata:
      labels:
        app: memcached
    spec:
      containers:
        - name: memcached
          image: memcached
          args: ["-I", "128m", "-m", "4096"]
          resources:
            requests:
              memory: "1Gi"
              cpu: "500m" # 0.5 CPU
            limits:
              memory: "5Gi"
              cpu: "500m" # 0.5 CP
---
apiVersion: v1
kind: Service
metadata:
  name: memcached-service
spec:
  type: ClusterIP # ClusterIP makes it accessible only within the cluster
  ports:
    - port: 11211 # Default memcached port
      targetPort: 11211
      protocol: TCP
  selector:
    app: memcached
```
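
These manifests are self-contained; a sketch of deploying one of them and checking that it comes up, assuming a cluster where the `local-path` storage class (for the filesystem variant) and a LoadBalancer implementation are available:

```bash
kubectl apply -f benchmark/rbcs-memcache.yml
kubectl get pods -l app=rbcs
kubectl get svc rbcs-service
```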

@@ -38,8 +38,7 @@ allprojects { subproject ->
         withSourcesJar()
         modularity.inferModulePath = true
         toolchain {
-            languageVersion = JavaLanguageVersion.of(23)
+            languageVersion = JavaLanguageVersion.of(21)
-            vendor = JvmVendorSpec.ORACLE
         }
     }
 

doc/benchmarks.md (new file, 87 lines):

```markdown
# RBCS performance benchmarks

All test were executed under the following conditions:
- CPU: Intel Celeron J3455 (4 physical cores)
- memory: 8GB DDR3L 1600 MHz
- disk: SATA3 120GB SSD
- HTTP compression: disabled
- cache compression: disabled
- digest: none
- authentication: disabled
- TLS: disabled
- network RTT: 14ms
- network bandwidth: 112 MiB/s

### In memory cache backend

| Cache backend | CPU | CPU quota | Memory quota (GB) | Request size (b) | Client connections | PUT (req/s) | GET (req/s) |
|---------------|---------------------|-----------|-------------------|------------------|--------------------|-------------|-------------|
| in-memory | Intel Celeron J3455 | 1.00 | 4 | 128 | 10 | 3691 | 4037 |
| in-memory | Intel Celeron J3455 | 1.00 | 4 | 128 | 100 | 6881 | 7483 |
| in-memory | Intel Celeron J3455 | 1.00 | 4 | 512 | 10 | 3790 | 4069 |
| in-memory | Intel Celeron J3455 | 1.00 | 4 | 512 | 100 | 6716 | 7408 |
| in-memory | Intel Celeron J3455 | 1.00 | 4 | 4096 | 10 | 3399 | 1974 |
| in-memory | Intel Celeron J3455 | 1.00 | 4 | 4096 | 100 | 5341 | 6402 |
| in-memory | Intel Celeron J3455 | 1.00 | 4 | 65536 | 10 | 1099 | 1116 |
| in-memory | Intel Celeron J3455 | 1.00 | 4 | 65536 | 100 | 1379 | 1703 |
| in-memory | Intel Celeron J3455 | 3.50 | 4 | 128 | 10 | 4443 | 5170 |
| in-memory | Intel Celeron J3455 | 3.50 | 4 | 128 | 100 | 12813 | 13568 |
| in-memory | Intel Celeron J3455 | 3.50 | 4 | 512 | 10 | 4450 | 4383 |
| in-memory | Intel Celeron J3455 | 3.50 | 4 | 512 | 100 | 12212 | 13586 |
| in-memory | Intel Celeron J3455 | 3.50 | 4 | 4096 | 10 | 3441 | 3012 |
| in-memory | Intel Celeron J3455 | 3.50 | 4 | 4096 | 100 | 8982 | 10452 |
| in-memory | Intel Celeron J3455 | 3.50 | 4 | 65536 | 10 | 1391 | 1167 |
| in-memory | Intel Celeron J3455 | 3.50 | 4 | 65536 | 100 | 1303 | 1151 |

### Filesystem cache backend

compression: disabled
digest: none
authentication: disabled
TLS: disabled

| Cache backend | CPU | CPU quota | Memory quota (GB) | Request size (b) | Client connections | PUT (req/s) | GET (req/s) |
|---------------|---------------------|-----------|-------------------|------------------|--------------------|-------------|-------------|
| filesystem | Intel Celeron J3455 | 1.00 | 0.25 | 128 | 10 | 1208 | 2048 |
| filesystem | Intel Celeron J3455 | 1.00 | 0.25 | 128 | 100 | 1304 | 2394 |
| filesystem | Intel Celeron J3455 | 1.00 | 0.25 | 512 | 10 | 1408 | 2157 |
| filesystem | Intel Celeron J3455 | 1.00 | 0.25 | 512 | 100 | 1282 | 1888 |
| filesystem | Intel Celeron J3455 | 1.00 | 0.25 | 4096 | 10 | 1291 | 1256 |
| filesystem | Intel Celeron J3455 | 1.00 | 0.25 | 4096 | 100 | 1170 | 1423 |
| filesystem | Intel Celeron J3455 | 1.00 | 0.25 | 65536 | 10 | 313 | 606 |
| filesystem | Intel Celeron J3455 | 1.00 | 0.25 | 65536 | 100 | 298 | 609 |
| filesystem | Intel Celeron J3455 | 3.50 | 0.25 | 128 | 10 | 2195 | 3477 |
| filesystem | Intel Celeron J3455 | 3.50 | 0.25 | 128 | 100 | 2480 | 6207 |
| filesystem | Intel Celeron J3455 | 3.50 | 0.25 | 512 | 10 | 2164 | 3413 |
| filesystem | Intel Celeron J3455 | 3.50 | 0.25 | 512 | 100 | 2842 | 6218 |
| filesystem | Intel Celeron J3455 | 3.50 | 0.25 | 4096 | 10 | 1302 | 2591 |
| filesystem | Intel Celeron J3455 | 3.50 | 0.25 | 4096 | 100 | 2270 | 3045 |
| filesystem | Intel Celeron J3455 | 3.50 | 0.25 | 65536 | 10 | 375 | 394 |
| filesystem | Intel Celeron J3455 | 3.50 | 0.25 | 65536 | 100 | 364 | 462 |

### Memcache cache backend

compression: disabled
digest: MD5
authentication: disabled
TLS: disabled

| Cache backend | CPU | CPU quota | Memory quota (GB) | Request size (b) | Client connections | PUT (req/s) | GET (req/s) |
|---------------|---------------------|-----------|-------------------|------------------|--------------------|-------------|-------------|
| memcache | Intel Celeron J3455 | 1.00 | 0.25 | 128 | 10 | 2505 | 2578 |
| memcache | Intel Celeron J3455 | 1.00 | 0.25 | 128 | 100 | 3582 | 3935 |
| memcache | Intel Celeron J3455 | 1.00 | 0.25 | 512 | 10 | 2495 | 2784 |
| memcache | Intel Celeron J3455 | 1.00 | 0.25 | 512 | 100 | 3565 | 3883 |
| memcache | Intel Celeron J3455 | 1.00 | 0.25 | 4096 | 10 | 2174 | 2505 |
| memcache | Intel Celeron J3455 | 1.00 | 0.25 | 4096 | 100 | 2937 | 3563 |
| memcache | Intel Celeron J3455 | 1.00 | 0.25 | 65536 | 10 | 648 | 1074 |
| memcache | Intel Celeron J3455 | 1.00 | 0.25 | 65536 | 100 | 724 | 1548 |
| memcache | Intel Celeron J3455 | 3.50 | 0.25 | 128 | 10 | 2362 | 2927 |
| memcache | Intel Celeron J3455 | 3.50 | 0.25 | 128 | 100 | 5491 | 6531 |
| memcache | Intel Celeron J3455 | 3.50 | 0.25 | 512 | 10 | 2125 | 2807 |
| memcache | Intel Celeron J3455 | 3.50 | 0.25 | 512 | 100 | 5173 | 6242 |
| memcache | Intel Celeron J3455 | 3.50 | 0.25 | 4096 | 10 | 1720 | 2397 |
| memcache | Intel Celeron J3455 | 3.50 | 0.25 | 4096 | 100 | 3871 | 5859 |
| memcache | Intel Celeron J3455 | 3.50 | 0.25 | 65536 | 10 | 616 | 1016 |
| memcache | Intel Celeron J3455 | 3.50 | 0.25 | 65536 | 100 | 820 | 1677 |
```
@@ -24,6 +24,7 @@ Configures connection handling parameters.
 - `read-idle-timeout` (optional, default: PT60S): Connection timeout when no reads
 - `write-idle-timeout` (optional, default: PT60S): Connection timeout when no writes
 - `max-request-size` (optional, default: 0x4000000): Maximum allowed request body size
+- `chunk-size` (default: 0x10000): Maximum socket write size
 
 #### `<event-executor>`
 Configures event execution settings.

@@ -44,7 +45,6 @@ A simple storage backend that uses an hash map to store data in memory
 - `digest` (default: MD5): Key hashing algorithm
 - `enable-compression` (default: true): Enable deflate compression
 - `compression-level` (default: -1): Compression level (-1 to 9)
-- `chunk-size` (default: 0x10000): Maximum socket write size
 
 ##### FileSystem Cache
 

@@ -56,7 +56,6 @@ A storage backend that stores data in a folder on the disk
 - `digest` (default: MD5): Key hashing algorithm
 - `enable-compression` (default: true): Enable deflate compression
 - `compression-level` (default: -1): Compression level
-- `chunk-size` (default: 0x10000): Maximum in-memory cache value size
 
 
 #### `<authorization>`
 Configures user and group-based access control.

@@ -134,12 +133,24 @@ Configures TLS encryption.
         idle-timeout="PT10S"
         read-idle-timeout="PT20S"
         write-idle-timeout="PT20S"
-        read-timeout="PT5S"
-        write-timeout="PT5S"/>
+        chunk-size="0x1000"/>
     <event-executor use-virtual-threads="true"/>
 
     <cache xs:type="rbcs:inMemoryCacheType" max-age="P7D" enable-compression="false" max-size="0x10000000" />
-    <!--cache xs:type="rbcs:fileSystemCacheType" max-age="P7D" enable-compression="false" path="${sys:java.io.tmpdir}/rbcs"/-->
-    <authorization>
+    <!-- uncomment this to enable the filesystem storage backend, sotring cache data in "${sys:java.io.tmpdir}/rbcs"
+    <cache xs:type="rbcs:fileSystemCacheType" max-age="P7D" enable-compression="false" path="${sys:java.io.tmpdir}/rbcs"/>
+    -->
+
+    <!-- uncomment this to use memcache as the storage backend, also make sure you have
+    the memcache plugin installed in the `plugins` directory if you are using running
+    the jar version of RBCS
+    <cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" digest="MD5">
+        <server host="127.0.0.1" port="11211" max-connections="256"/>
+    </cache>
+    -->
+
+    <authorization>
     <users>
     <user name="user1" password="II+qeNLft2pZ/JVNo9F7jpjM/BqEcfsJW27NZ6dPVs8tAwHbxrJppKYsbL7J/SMl">
     <quota calls="100" period="PT1S"/>

@@ -4,7 +4,7 @@ org.gradle.caching=true
 
 rbcs.version = 0.2.0
 
-lys.version = 2025.02.26
+lys.version = 2025.03.08
 
 gitea.maven.url = https://gitea.woggioni.net/api/packages/woggioni/maven
 docker.registry.url=gitea.woggioni.net

@@ -5,9 +5,12 @@ plugins {
 }
 
 dependencies {
+    implementation catalog.slf4j.api
+    implementation project(':rbcs-common')
     api catalog.netty.common
     api catalog.netty.buffer
     api catalog.netty.handler
+    api catalog.netty.codec.http
 }
 
 publishing {

@@ -1,10 +1,15 @@
 module net.woggioni.rbcs.api {
     requires static lombok;
-    requires java.xml;
-    requires io.netty.buffer;
     requires io.netty.handler;
-    requires io.netty.transport;
     requires io.netty.common;
+    requires net.woggioni.rbcs.common;
+    requires io.netty.transport;
+    requires io.netty.codec.http;
+    requires io.netty.buffer;
+    requires org.slf4j;
+    requires java.xml;
+
 
     exports net.woggioni.rbcs.api;
     exports net.woggioni.rbcs.api.exception;
     exports net.woggioni.rbcs.api.message;
New file (57 lines), the `net.woggioni.rbcs.api.CacheHandler` class:

```java
package net.woggioni.rbcs.api;

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.codec.http.LastHttpContent;
import io.netty.util.ReferenceCounted;
import lombok.extern.slf4j.Slf4j;
import net.woggioni.rbcs.api.message.CacheMessage;

@Slf4j
public abstract class CacheHandler extends ChannelInboundHandlerAdapter {
    private boolean requestFinished = false;

    abstract protected void channelRead0(ChannelHandlerContext ctx, CacheMessage msg);

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if(!requestFinished && msg instanceof CacheMessage) {
            if(msg instanceof CacheMessage.LastCacheContent) requestFinished = true;
            try {
                channelRead0(ctx, (CacheMessage) msg);
            } finally {
                if(msg instanceof ReferenceCounted rc) rc.release();
            }
        } else {
            ctx.fireChannelRead(msg);
        }
    }

    protected void sendMessageAndFlush(ChannelHandlerContext ctx, Object msg) {
        sendMessage(ctx, msg, true);
    }

    protected void sendMessage(ChannelHandlerContext ctx, Object msg) {
        sendMessage(ctx, msg, false);
    }

    private void sendMessage(ChannelHandlerContext ctx, Object msg, boolean flush) {
        ctx.write(msg);
        if(
            msg instanceof CacheMessage.LastCacheContent ||
            msg instanceof CacheMessage.CachePutResponse ||
            msg instanceof CacheMessage.CacheValueNotFoundResponse ||
            msg instanceof LastHttpContent
        ) {
            ctx.flush();
            ctx.pipeline().remove(this);
        } else if(flush) {
            ctx.flush();
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) throws Exception {
        super.exceptionCaught(ctx, cause);
    }
}
```
@@ -1,13 +1,13 @@
 package net.woggioni.rbcs.api;
 
 import io.netty.channel.ChannelFactory;
-import io.netty.channel.ChannelHandler;
 import io.netty.channel.EventLoopGroup;
 import io.netty.channel.socket.DatagramChannel;
 import io.netty.channel.socket.SocketChannel;
 
 public interface CacheHandlerFactory extends AsyncCloseable {
-    ChannelHandler newHandler(
+    CacheHandler newHandler(
+        Configuration configuration,
         EventLoopGroup eventLoopGroup,
         ChannelFactory<SocketChannel> socketChannelFactory,
         ChannelFactory<DatagramChannel> datagramChannelFactory

@@ -1,10 +1,9 @@
 package net.woggioni.rbcs.api;
 
+import java.io.Serializable;
 import lombok.Getter;
 import lombok.RequiredArgsConstructor;
 
-import java.io.Serializable;
-
 @Getter
 @RequiredArgsConstructor
 public class CacheValueMetadata implements Serializable {

@@ -1,16 +1,15 @@
 package net.woggioni.rbcs.api;
 
 
-import lombok.EqualsAndHashCode;
-import lombok.NonNull;
-import lombok.Value;
-
 import java.nio.file.Path;
 import java.security.cert.X509Certificate;
 import java.time.Duration;
 import java.util.Map;
 import java.util.Set;
 import java.util.stream.Collectors;
+import lombok.EqualsAndHashCode;
+import lombok.NonNull;
+import lombok.Value;
 
 @Value
 public class Configuration {

@@ -39,6 +38,7 @@ public class Configuration {
     Duration readIdleTimeout;
     Duration writeIdleTimeout;
     int maxRequestSize;
+    int chunkSize;
 }
 
 @Value

@@ -12,6 +12,7 @@ plugins {
 import net.woggioni.gradle.envelope.EnvelopePlugin
 import net.woggioni.gradle.envelope.EnvelopeJarTask
 import net.woggioni.gradle.graalvm.NativeImageConfigurationTask
+import net.woggioni.gradle.graalvm.NativeImageTask
 import net.woggioni.gradle.graalvm.NativeImagePlugin
 import net.woggioni.gradle.graalvm.UpxTask
 import net.woggioni.gradle.graalvm.JlinkPlugin

@@ -90,11 +91,10 @@ Provider<EnvelopeJarTask> envelopeJarTaskProvider = tasks.named(EnvelopePlugin.E
 }
 
 tasks.named(NativeImagePlugin.CONFIGURE_NATIVE_IMAGE_TASK_NAME, NativeImageConfigurationTask) {
-    javaLauncher = javaToolchains.launcherFor {
+    toolchain {
         languageVersion = JavaLanguageVersion.of(21)
-        vendor = JvmVendorSpec.ORACLE
+        vendor = JvmVendorSpec.GRAAL_VM
     }
 
     mainClass = "net.woggioni.rbcs.cli.graal.GraalNativeImageConfiguration"
     classpath = project.files(
         configurations.configureNativeImageRuntimeClasspath,

@@ -105,9 +105,14 @@ tasks.named(NativeImagePlugin.CONFIGURE_NATIVE_IMAGE_TASK_NAME, NativeImageConfi
     systemProperty('io.netty.leakDetectionLevel', 'DISABLED')
     modularity.inferModulePath = false
     enabled = true
+    systemProperty('gradle.tmp.dir', temporaryDir.toString())
 }
 
 nativeImage {
+    toolchain {
+        languageVersion = JavaLanguageVersion.of(23)
+        vendor = JvmVendorSpec.GRAAL_VM
+    }
     mainClass = mainClassName
     // mainModule = mainModuleName
     useMusl = true
rbcs-cli/conf/rbcs-server.xml (new file, 53 lines):

```xml
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<rbcs:server xmlns:xs="http://www.w3.org/2001/XMLSchema-instance"
             xmlns:rbcs="urn:net.woggioni.rbcs.server"
             xmlns:rbcs-memcache="urn:net.woggioni.rbcs.server.memcache"
             xs:schemaLocation="urn:net.woggioni.rbcs.server.memcache jpms://net.woggioni.rbcs.server.memcache/net/woggioni/rbcs/server/memcache/schema/rbcs-memcache.xsd urn:net.woggioni.rbcs.server jpms://net.woggioni.rbcs.server/net/woggioni/rbcs/server/schema/rbcs-server.xsd"
>
    <bind host="127.0.0.1" port="8080" incoming-connections-backlog-size="1024"/>
    <connection
        max-request-size="67108864"
        idle-timeout="PT10S"
        read-idle-timeout="PT20S"
        write-idle-timeout="PT20S"/>
    <event-executor use-virtual-threads="true"/>
    <cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" chunk-size="0x1000" digest="MD5">
        <server host="127.0.0.1" port="11211" max-connections="256"/>
    </cache>
    <!--cache xs:type="rbcs:inMemoryCacheType" max-age="P7D" enable-compression="false" max-size="0x10000000" /-->
    <!--cache xs:type="rbcs:fileSystemCacheType" max-age="P7D" enable-compression="false" /-->
    <authorization>
        <users>
            <user name="woggioni" password="II+qeNLft2pZ/JVNo9F7jpjM/BqEcfsJW27NZ6dPVs8tAwHbxrJppKYsbL7J/SMl">
                <quota calls="100" period="PT1S"/>
            </user>
            <user name="gitea" password="v6T9+q6/VNpvLknji3ixPiyz2YZCQMXj2FN7hvzbfc2Ig+IzAHO0iiBCH9oWuBDq"/>
            <anonymous>
                <quota calls="10" period="PT60S" initial-available-calls="10" max-available-calls="10"/>
            </anonymous>
        </users>
        <groups>
            <group name="readers">
                <users>
                    <anonymous/>
                </users>
                <roles>
                    <reader/>
                </roles>
            </group>
            <group name="writers">
                <users>
                    <user ref="woggioni"/>
                    <user ref="gitea"/>
                </users>
                <roles>
                    <reader/>
                    <writer/>
                </roles>
            </group>
        </groups>
    </authorization>
    <authentication>
        <none/>
    </authentication>
</rbcs:server>
```
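
A sketch of running the CLI against this checked-in sample configuration; it expects a memcached instance on 127.0.0.1:11211, per the `<cache>` element above:

```bash
java -jar rbcs-cli.jar server -c rbcs-cli/conf/rbcs-server.xml
```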
@@ -1,2 +1,2 @@
-Args=-O3 --gc=serial --install-exit-handlers --initialize-at-run-time=io.netty --enable-url-protocols=jpms --initialize-at-build-time=net.woggioni.rbcs.common.RbcsUrlStreamHandlerFactory,net.woggioni.rbcs.common.RbcsUrlStreamHandlerFactory$JpmsHandler
+Args=-O3 -march=x86-64-v2 --gc=serial --install-exit-handlers --initialize-at-run-time=io.netty --enable-url-protocols=jpms --initialize-at-build-time=net.woggioni.rbcs.common.RbcsUrlStreamHandlerFactory,net.woggioni.rbcs.common.RbcsUrlStreamHandlerFactory$JpmsHandler
 #-H:TraceClassInitialization=io.netty.handler.ssl.BouncyCastleAlpnSslUtils

@@ -487,6 +487,10 @@
     "name":"jdk.internal.misc.Unsafe",
     "methods":[{"name":"getUnsafe","parameterTypes":[] }]
   },
+  {
+    "name":"net.woggioni.rbcs.api.CacheHandler",
+    "methods":[{"name":"channelRead","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }]
+  },
   {
     "name":"net.woggioni.rbcs.cli.RemoteBuildCacheServerCli",
     "allDeclaredFields":true,

@@ -552,11 +556,7 @@
   },
   {
     "name":"net.woggioni.rbcs.client.RemoteBuildCacheClient$sendRequest$1$operationComplete$responseHandler$1",
-    "methods":[{"name":"channelInactive","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }]
+    "methods":[{"name":"channelInactive","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }, {"name":"userEventTriggered","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }]
-  },
-  {
-    "name":"net.woggioni.rbcs.client.RemoteBuildCacheClient$sendRequest$1$operationComplete$timeoutHandler$1",
-    "methods":[{"name":"userEventTriggered","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }]
   },
   {
     "name":"net.woggioni.rbcs.server.RemoteBuildCacheServer$HttpChunkContentCompressor",

@@ -588,17 +588,13 @@
     "name":"net.woggioni.rbcs.server.exception.ExceptionHandler",
     "methods":[{"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }]
   },
-  {
-    "name":"net.woggioni.rbcs.server.handler.CacheContentHandler",
-    "methods":[{"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }]
-  },
   {
     "name":"net.woggioni.rbcs.server.handler.MaxRequestSizeHandler",
     "methods":[{"name":"channelRead","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }]
   },
   {
     "name":"net.woggioni.rbcs.server.handler.ServerHandler",
-    "methods":[{"name":"channelRead","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }, {"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }, {"name":"write","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object","io.netty.channel.ChannelPromise"] }]
+    "methods":[{"name":"channelRead","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object"] }, {"name":"channelReadComplete","parameterTypes":["io.netty.channel.ChannelHandlerContext"] }, {"name":"exceptionCaught","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Throwable"] }, {"name":"write","parameterTypes":["io.netty.channel.ChannelHandlerContext","java.lang.Object","io.netty.channel.ChannelPromise"] }]
   },
   {
     "name":"net.woggioni.rbcs.server.handler.TraceHandler",

@@ -36,6 +36,8 @@
     "pattern":"\\Qnet/woggioni/rbcs/server/rbcs-default.xml\\E"
   }, {
     "pattern":"\\Qnet/woggioni/rbcs/server/schema/rbcs-server.xsd\\E"
+  }, {
+    "pattern":"\\Q/net/woggioni/rbcs/server/memcache/schema/rbcs-memcache.xsd\\E"
   }, {
     "pattern":"java.base:\\Qsun/text/resources/LineBreakIteratorData\\E"
   }]},
@@ -32,8 +32,9 @@ object GraalNativeImageConfiguration {
|
|||||||
@JvmStatic
|
@JvmStatic
|
||||||
fun main(vararg args : String) {
|
fun main(vararg args : String) {
|
||||||
|
|
||||||
val serverDoc = RemoteBuildCacheServer.DEFAULT_CONFIGURATION_URL.openStream().use {
|
val serverURL = URI.create("file:conf/rbcs-server.xml").toURL()
|
||||||
Xml.parseXml(RemoteBuildCacheServer.DEFAULT_CONFIGURATION_URL, it)
|
val serverDoc = serverURL.openStream().use {
|
||||||
|
Xml.parseXml(serverURL, it)
|
||||||
}
|
}
|
||||||
Parser.parse(serverDoc)
|
Parser.parse(serverDoc)
|
||||||
|
|
||||||
@@ -70,7 +71,6 @@ object GraalNativeImageConfiguration {
|
|||||||
compressionLevel = Deflater.DEFAULT_COMPRESSION,
|
compressionLevel = Deflater.DEFAULT_COMPRESSION,
|
||||||
compressionEnabled = false,
|
compressionEnabled = false,
|
||||||
maxSize = 0x1000000,
|
maxSize = 0x1000000,
|
||||||
chunkSize = 0x1000
|
|
||||||
),
|
),
|
||||||
FileSystemCacheConfiguration(
|
FileSystemCacheConfiguration(
|
||||||
Path.of(System.getProperty("java.io.tmpdir")).resolve("rbcs"),
|
Path.of(System.getProperty("java.io.tmpdir")).resolve("rbcs"),
|
||||||
@@ -78,7 +78,6 @@ object GraalNativeImageConfiguration {
|
|||||||
digestAlgorithm = "MD5",
|
digestAlgorithm = "MD5",
|
||||||
compressionLevel = Deflater.DEFAULT_COMPRESSION,
|
compressionLevel = Deflater.DEFAULT_COMPRESSION,
|
||||||
compressionEnabled = false,
|
compressionEnabled = false,
|
||||||
chunkSize = 0x1000
|
|
||||||
),
|
),
|
||||||
MemcacheCacheConfiguration(
|
MemcacheCacheConfiguration(
|
||||||
listOf(MemcacheCacheConfiguration.Server(
|
listOf(MemcacheCacheConfiguration.Server(
|
||||||
@@ -90,7 +89,6 @@ object GraalNativeImageConfiguration {
|
|||||||
"MD5",
|
"MD5",
|
||||||
null,
|
null,
|
||||||
1,
|
1,
|
||||||
0x1000
|
|
||||||
)
|
)
|
||||||
)
|
)
|
||||||
|
|
||||||
@@ -106,6 +104,7 @@ object GraalNativeImageConfiguration {
|
|||||||
Duration.ofSeconds(15),
|
Duration.ofSeconds(15),
|
||||||
Duration.ofSeconds(15),
|
Duration.ofSeconds(15),
|
||||||
0x10000,
|
0x10000,
|
||||||
|
0x1000
|
||||||
),
|
),
|
||||||
users.asSequence().map { it.name to it }.toMap(),
|
users.asSequence().map { it.name to it }.toMap(),
|
||||||
sequenceOf(writersGroup, readersGroup).map { it.name to it }.toMap(),
|
sequenceOf(writersGroup, readersGroup).map { it.name to it }.toMap(),
|
||||||
@@ -126,7 +125,6 @@ object GraalNativeImageConfiguration {
|
|||||||
"MD5",
|
"MD5",
|
||||||
null,
|
null,
|
||||||
1,
|
1,
|
||||||
0x1000
|
|
||||||
)
|
)
|
||||||
|
|
||||||
val serverHandle = RemoteBuildCacheServer(serverConfiguration).run()
|
val serverHandle = RemoteBuildCacheServer(serverConfiguration).run()
|
||||||
@@ -134,7 +132,12 @@ object GraalNativeImageConfiguration {
|
|||||||
|
|
||||||
val clientProfile = ClientConfiguration.Profile(
|
val clientProfile = ClientConfiguration.Profile(
|
||||||
URI.create("http://127.0.0.1:$serverPort/"),
|
URI.create("http://127.0.0.1:$serverPort/"),
|
||||||
null,
|
ClientConfiguration.Connection(
|
||||||
|
Duration.ofSeconds(5),
|
||||||
|
Duration.ofSeconds(5),
|
||||||
|
Duration.ofSeconds(7),
|
||||||
|
true,
|
||||||
|
),
|
||||||
ClientConfiguration.Authentication.BasicAuthenticationCredentials("user3", PASSWORD),
|
ClientConfiguration.Authentication.BasicAuthenticationCredentials("user3", PASSWORD),
|
||||||
Duration.ofSeconds(3),
|
Duration.ofSeconds(3),
|
||||||
10,
|
10,
|
||||||
@@ -176,6 +179,8 @@ object GraalNativeImageConfiguration {
|
|||||||
} catch (ee : ExecutionException) {
|
} catch (ee : ExecutionException) {
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
RemoteBuildCacheServerCli.main("--help")
|
System.setProperty("net.woggioni.rbcs.conf.dir", System.getProperty("gradle.tmp.dir"))
|
||||||
|
RemoteBuildCacheServerCli.createCommandLine().execute("--version")
|
||||||
|
RemoteBuildCacheServerCli.createCommandLine().execute("server", "-t", "PT10S")
|
||||||
}
|
}
|
||||||
}
|
}
|
@@ -26,8 +26,8 @@ class RemoteBuildCacheServerCli : RbcsCommand() {
 private fun setPropertyIfNotPresent(key: String, value: String) {
 System.getProperty(key) ?: System.setProperty(key, value)
 }
-@JvmStatic
-fun main(vararg args: String) {
+fun createCommandLine() : CommandLine {
 setPropertyIfNotPresent("logback.configurationFile", "net/woggioni/rbcs/cli/logback.xml")
 setPropertyIfNotPresent("io.netty.leakDetectionLevel", "DISABLED")
 val currentClassLoader = RemoteBuildCacheServerCli::class.java.classLoader
@@ -56,7 +56,12 @@ class RemoteBuildCacheServerCli : RbcsCommand() {
 addSubcommand(GetCommand())
 addSubcommand(HealthCheckCommand())
 })
-System.exit(commandLine.execute(*args))
+return commandLine
+}
+
+@JvmStatic
+fun main(vararg args: String) {
+System.exit(createCommandLine().execute(*args))
 }
 }

@@ -6,7 +6,6 @@ import net.woggioni.rbcs.client.Configuration
 import net.woggioni.rbcs.common.createLogger
 import net.woggioni.rbcs.common.debug
 import picocli.CommandLine
-import java.lang.IllegalArgumentException
 import java.nio.file.Path

 @CommandLine.Command(
@@ -38,11 +38,12 @@ data class Configuration(
 val readIdleTimeout: Duration,
 val writeIdleTimeout: Duration,
 val idleTimeout: Duration,
+val requestPipelining : Boolean,
 )

 data class Profile(
 val serverURI: URI,
-val connection: Connection?,
+val connection: Connection,
 val authentication: Authentication?,
 val connectionTimeout: Duration?,
 val maxConnections: Int,
@@ -4,9 +4,7 @@ import io.netty.bootstrap.Bootstrap
 import io.netty.buffer.ByteBuf
 import io.netty.buffer.Unpooled
 import io.netty.channel.Channel
-import io.netty.channel.ChannelHandler
 import io.netty.channel.ChannelHandlerContext
-import io.netty.channel.ChannelInboundHandlerAdapter
 import io.netty.channel.ChannelOption
 import io.netty.channel.ChannelPipeline
 import io.netty.channel.SimpleChannelInboundHandler
@@ -55,7 +53,7 @@ import kotlin.random.Random
 import io.netty.util.concurrent.Future as NettyFuture

 class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoCloseable {
-companion object{
+companion object {
 private val log = createLogger<RemoteBuildCacheClient>()
 }

@@ -73,7 +71,7 @@ class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoC
 *tlsClientAuthenticationCredentials.certificateChain
 )
 profile.tlsTruststore?.let { trustStore ->
-if(!trustStore.verifyServerCertificate) {
+if (!trustStore.verifyServerCertificate) {
 trustManager(object : X509TrustManager {
 override fun checkClientTrusted(certChain: Array<out X509Certificate>, p1: String?) {
 }
@@ -176,7 +174,7 @@ class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoC

 // HTTP handlers
 pipeline.addLast("codec", HttpClientCodec())
-if(profile.compressionEnabled) {
+if (profile.compressionEnabled) {
 pipeline.addLast("decompressor", HttpContentDecompressor())
 }
 pipeline.addLast("aggregator", HttpObjectAggregator(134217728))
@@ -297,48 +295,32 @@ class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoC
 // Custom handler for processing responses

 pool.acquire().addListener(object : GenericFutureListener<NettyFuture<Channel>> {
-private val handlers = mutableListOf<ChannelHandler>()
-
-fun cleanup(channel: Channel, pipeline: ChannelPipeline) {
-handlers.forEach(pipeline::remove)
-pool.release(channel)
-}

 override fun operationComplete(channelFuture: Future<Channel>) {
 if (channelFuture.isSuccess) {
 val channel = channelFuture.now
 val pipeline = channel.pipeline()
-val timeoutHandler = object : ChannelInboundHandlerAdapter() {
-override fun userEventTriggered(ctx: ChannelHandlerContext, evt: Any) {
-if (evt is IdleStateEvent) {
-val te = when (evt.state()) {
-IdleState.READER_IDLE -> TimeoutException(
-"Read timeout",
-)
-
-IdleState.WRITER_IDLE -> TimeoutException("Write timeout")
-
-IdleState.ALL_IDLE -> TimeoutException("Idle timeout")
-null -> throw IllegalStateException("This should never happen")
-}
-responseFuture.completeExceptionally(te)
-ctx.close()
-}
-}
-}
 val closeListener = GenericFutureListener<Future<Void>> {
 responseFuture.completeExceptionally(IOException("The remote server closed the connection"))
-pool.release(channel)
 }
+channel.closeFuture().addListener(closeListener)

 val responseHandler = object : SimpleChannelInboundHandler<FullHttpResponse>() {

+override fun handlerAdded(ctx: ChannelHandlerContext) {
+channel.closeFuture().removeListener(closeListener)
+}
+
 override fun channelRead0(
 ctx: ChannelHandlerContext,
 response: FullHttpResponse
 ) {
-channel.closeFuture().removeListener(closeListener)
-cleanup(channel, pipeline)
+pipeline.remove(this)
 responseFuture.complete(response)
+if(!profile.connection.requestPipelining) {
+pool.release(channel)
+}
 }

 override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
@@ -352,16 +334,39 @@ class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoC
 }

 override fun channelInactive(ctx: ChannelHandlerContext) {
-pool.release(channel)
 responseFuture.completeExceptionally(IOException("The remote server closed the connection"))
+if(!profile.connection.requestPipelining) {
+pool.release(channel)
+}
 super.channelInactive(ctx)
 }

+override fun userEventTriggered(ctx: ChannelHandlerContext, evt: Any) {
+if (evt is IdleStateEvent) {
+val te = when (evt.state()) {
+IdleState.READER_IDLE -> TimeoutException(
+"Read timeout",
+)
+
+IdleState.WRITER_IDLE -> TimeoutException("Write timeout")
+
+IdleState.ALL_IDLE -> TimeoutException("Idle timeout")
+null -> throw IllegalStateException("This should never happen")
+}
+responseFuture.completeExceptionally(te)
+super.userEventTriggered(ctx, evt)
+if (this === pipeline.last()) {
+ctx.close()
+}
+if(!profile.connection.requestPipelining) {
+pool.release(channel)
+}
+} else {
+super.userEventTriggered(ctx, evt)
+}
+}
 }
-for (handler in arrayOf(timeoutHandler, responseHandler)) {
-handlers.add(handler)
-}
-pipeline.addLast(timeoutHandler, responseHandler)
-channel.closeFuture().addListener(closeListener)
+pipeline.addLast(responseHandler)


 // Prepare the HTTP request
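The idle-timeout handling added above consumes Netty `IdleStateEvent`s, which are produced by an `IdleStateHandler` installed earlier in the pipeline. The sketch below is not RBCS code; the class name and the timeout values (mirroring the 5s/5s/7s `Connection` used in the test above) are illustrative only, showing how such events originate.

```kotlin
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.ChannelInboundHandlerAdapter
import io.netty.channel.ChannelInitializer
import io.netty.channel.socket.SocketChannel
import io.netty.handler.timeout.IdleState
import io.netty.handler.timeout.IdleStateEvent
import io.netty.handler.timeout.IdleStateHandler
import java.util.concurrent.TimeUnit

// Minimal, hypothetical initializer: IdleStateHandler turns read/write/all-idle
// periods into IdleStateEvents that downstream handlers can react to.
class IdleAwareInitializer : ChannelInitializer<SocketChannel>() {
    override fun initChannel(ch: SocketChannel) {
        // readIdle = 5s, writeIdle = 5s, allIdle = 7s (example values)
        ch.pipeline().addLast(IdleStateHandler(5, 5, 7, TimeUnit.SECONDS))
        ch.pipeline().addLast(object : ChannelInboundHandlerAdapter() {
            override fun userEventTriggered(ctx: ChannelHandlerContext, evt: Any) {
                if (evt is IdleStateEvent && evt.state() == IdleState.ALL_IDLE) {
                    // Close a connection that has been completely idle for too long
                    ctx.close()
                } else {
                    super.userEventTriggered(ctx, evt)
                }
            }
        })
    }
}
```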
@@ -373,13 +378,14 @@ class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoC
 uri.rawPath,
 content ?: Unpooled.buffer(0)
 ).apply {
+// Set headers
 headers().apply {
 if (content != null) {
 set(HttpHeaderNames.CONTENT_LENGTH, content.readableBytes())
 }
 set(HttpHeaderNames.HOST, profile.serverURI.host)
 set(HttpHeaderNames.CONNECTION, HttpHeaderValues.KEEP_ALIVE)
-if(profile.compressionEnabled) {
+if (profile.compressionEnabled) {
 set(
 HttpHeaderNames.ACCEPT_ENCODING,
 HttpHeaderValues.GZIP.toString() + "," + HttpHeaderValues.DEFLATE.toString()
@@ -398,9 +404,16 @@ class RemoteBuildCacheClient(private val profile: Configuration.Profile) : AutoC
 }
 }

-// Set headers
 // Send the request
-channel.writeAndFlush(request)
+channel.writeAndFlush(request).addListener {
+if(!it.isSuccess) {
+val ex = it.cause()
+log.warn(ex.message, ex)
+}
+if(profile.connection.requestPipelining) {
+pool.release(channel)
+}
+}
 } else {
 responseFuture.completeExceptionally(channelFuture.cause())
 }
@@ -30,7 +30,12 @@ object Parser {
 ?: throw ConfigurationException("base-url attribute is required")
 var authentication: Configuration.Authentication? = null
 var retryPolicy: Configuration.RetryPolicy? = null
-var connection : Configuration.Connection? = null
+var connection : Configuration.Connection = Configuration.Connection(
+Duration.ofSeconds(60),
+Duration.ofSeconds(60),
+Duration.ofSeconds(30),
+false
+)
 var trustStore : Configuration.TrustStore? = null
 for (gchild in child.asIterable()) {
 when (gchild.localName) {
@@ -97,10 +102,13 @@ object Parser {
 ?.let(Duration::parse) ?: Duration.of(60, ChronoUnit.SECONDS)
 val writeIdleTimeout = gchild.renderAttribute("write-idle-timeout")
 ?.let(Duration::parse) ?: Duration.of(60, ChronoUnit.SECONDS)
+val requestPipelining = gchild.renderAttribute("request-pipelining")
+?.let(String::toBoolean) ?: false
 connection = Configuration.Connection(
 readIdleTimeout,
 writeIdleTimeout,
 idleTimeout,
+requestPipelining
 )
 }

@@ -123,6 +123,13 @@
 </xs:documentation>
 </xs:annotation>
 </xs:attribute>
+<xs:attribute name="request-pipelining" type="xs:boolean" use="optional" default="false">
+<xs:annotation>
+<xs:documentation>
+Enables HTTP/1.1 request pipelining
+</xs:documentation>
+</xs:annotation>
+</xs:attribute>
 </xs:complexType>

 <xs:complexType name="noAuthType">
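The new `request-pipelining` attribute is a boolean on the client connection settings, parsed with a `false` default as shown in the `Parser` hunk above. A minimal sketch of how it might appear in a client profile follows; the `<profile>`/`<connection>` element names and the `idle-timeout` attribute are assumptions, only the attributes visible in the parser and schema hunks are confirmed.

```xml
<!-- Illustrative only: element names beyond the attributes shown above are assumptions -->
<profile base-url="https://rbcs.example.com/">
    <connection
        read-idle-timeout="PT60S"
        write-idle-timeout="PT60S"
        idle-timeout="PT30S"
        request-pipelining="true"/>
</profile>
```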
@@ -6,7 +6,7 @@ plugins {
 }

 dependencies {
-implementation project(':rbcs-api')
+implementation catalog.netty.transport
 implementation catalog.slf4j.api
 implementation catalog.jwo
 implementation catalog.netty.buffer
@@ -22,7 +22,7 @@ The plugins currently supports the following configuration attributes:
 - `digest`: digest algorithm to use on the key before submission
 to memcache (optional, no digest is applied if omitted)
 - `compression`: compression algorithm to apply to cache values before,
-currently only `deflate` is supported (optionla, if omitted compression is disabled)
+currently only `deflate` is supported (optional, if omitted compression is disabled)
 - `compression-level`: compression level to use, deflate supports compression levels from 1 to 9,
 where 1 is for fast compression at the expense of speed (optional, 6 is used if omitted)
 ```xml
@@ -37,8 +37,7 @@ The plugins currently supports the following configuration attributes:
 max-age="P7D"
 digest="SHA-256"
 compression-mode="deflate"
-compression-level="6"
-chunk-size="0x10000">
+compression-level="6">
 <server host="127.0.0.1" port="11211" max-connections="256"/>
 <server host="127.0.0.1" port="11212" max-connections="256"/>
 </cache>
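The `compression-level` attribute described above maps to the standard deflate levels (1 = fastest, 9 = densest, 6 by default). The standalone Kotlin sketch below only illustrates that trade-off with the JDK `Deflater`; it is not the plugin's actual code path.

```kotlin
import java.io.ByteArrayOutputStream
import java.util.zip.Deflater

// Illustration of what the memcache plugin's "compression-level" attribute controls.
fun deflate(payload: ByteArray, level: Int = 6): ByteArray {
    val deflater = Deflater(level) // 1 = fastest, 9 = best compression ratio
    deflater.setInput(payload)
    deflater.finish()
    val buffer = ByteArray(8 * 1024)
    val out = ByteArrayOutputStream()
    while (!deflater.finished()) {
        val n = deflater.deflate(buffer)
        out.write(buffer, 0, n)
    }
    deflater.end()
    return out.toByteArray()
}
```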
@@ -1,14 +1,15 @@
|
|||||||
package net.woggioni.rbcs.server.memcache
|
package net.woggioni.rbcs.server.memcache
|
||||||
|
|
||||||
import io.netty.channel.ChannelFactory
|
import io.netty.channel.ChannelFactory
|
||||||
import io.netty.channel.ChannelHandler
|
|
||||||
import io.netty.channel.EventLoopGroup
|
import io.netty.channel.EventLoopGroup
|
||||||
import io.netty.channel.pool.FixedChannelPool
|
import io.netty.channel.pool.FixedChannelPool
|
||||||
import io.netty.channel.socket.DatagramChannel
|
import io.netty.channel.socket.DatagramChannel
|
||||||
import io.netty.channel.socket.SocketChannel
|
import io.netty.channel.socket.SocketChannel
|
||||||
|
import net.woggioni.rbcs.api.CacheHandler
|
||||||
import net.woggioni.rbcs.api.CacheHandlerFactory
|
import net.woggioni.rbcs.api.CacheHandlerFactory
|
||||||
import net.woggioni.rbcs.api.Configuration
|
import net.woggioni.rbcs.api.Configuration
|
||||||
import net.woggioni.rbcs.common.HostAndPort
|
import net.woggioni.rbcs.common.HostAndPort
|
||||||
|
import net.woggioni.rbcs.common.createLogger
|
||||||
import net.woggioni.rbcs.server.memcache.client.MemcacheClient
|
import net.woggioni.rbcs.server.memcache.client.MemcacheClient
|
||||||
import java.time.Duration
|
import java.time.Duration
|
||||||
import java.util.concurrent.CompletableFuture
|
import java.util.concurrent.CompletableFuture
|
||||||
@@ -22,9 +23,12 @@ data class MemcacheCacheConfiguration(
|
|||||||
val digestAlgorithm: String? = null,
|
val digestAlgorithm: String? = null,
|
||||||
val compressionMode: CompressionMode? = null,
|
val compressionMode: CompressionMode? = null,
|
||||||
val compressionLevel: Int,
|
val compressionLevel: Int,
|
||||||
val chunkSize: Int
|
|
||||||
) : Configuration.Cache {
|
) : Configuration.Cache {
|
||||||
|
|
||||||
|
companion object {
|
||||||
|
private val log = createLogger<MemcacheCacheConfiguration>()
|
||||||
|
}
|
||||||
|
|
||||||
enum class CompressionMode {
|
enum class CompressionMode {
|
||||||
/**
|
/**
|
||||||
* Deflate mode
|
* Deflate mode
|
||||||
@@ -43,14 +47,15 @@ data class MemcacheCacheConfiguration(
|
|||||||
private val connectionPoolMap = ConcurrentHashMap<HostAndPort, FixedChannelPool>()
|
private val connectionPoolMap = ConcurrentHashMap<HostAndPort, FixedChannelPool>()
|
||||||
|
|
||||||
override fun newHandler(
|
override fun newHandler(
|
||||||
|
cfg : Configuration,
|
||||||
eventLoop: EventLoopGroup,
|
eventLoop: EventLoopGroup,
|
||||||
socketChannelFactory: ChannelFactory<SocketChannel>,
|
socketChannelFactory: ChannelFactory<SocketChannel>,
|
||||||
datagramChannelFactory: ChannelFactory<DatagramChannel>
|
datagramChannelFactory: ChannelFactory<DatagramChannel>,
|
||||||
): ChannelHandler {
|
): CacheHandler {
|
||||||
return MemcacheCacheHandler(
|
return MemcacheCacheHandler(
|
||||||
MemcacheClient(
|
MemcacheClient(
|
||||||
this@MemcacheCacheConfiguration.servers,
|
this@MemcacheCacheConfiguration.servers,
|
||||||
chunkSize,
|
cfg.connection.chunkSize,
|
||||||
eventLoop,
|
eventLoop,
|
||||||
socketChannelFactory,
|
socketChannelFactory,
|
||||||
connectionPoolMap
|
connectionPoolMap
|
||||||
@@ -58,7 +63,7 @@ data class MemcacheCacheConfiguration(
|
|||||||
digestAlgorithm,
|
digestAlgorithm,
|
||||||
compressionMode != null,
|
compressionMode != null,
|
||||||
compressionLevel,
|
compressionLevel,
|
||||||
chunkSize,
|
cfg.connection.chunkSize,
|
||||||
maxAge
|
maxAge
|
||||||
)
|
)
|
||||||
}
|
}
|
||||||
@@ -69,15 +74,19 @@ data class MemcacheCacheConfiguration(
 val pools = connectionPoolMap.values.toList()
 val npools = pools.size
 val finished = AtomicInteger(0)
-pools.forEach { pool ->
-pool.closeAsync().addListener {
-if (!it.isSuccess) {
-failure.compareAndSet(null, it.cause())
-}
-if(finished.incrementAndGet() == npools) {
-when(val ex = failure.get()) {
-null -> complete(null)
-else -> completeExceptionally(ex)
+if (pools.isEmpty()) {
+complete(null)
+} else {
+pools.forEach { pool ->
+pool.closeAsync().addListener {
+if (!it.isSuccess) {
+failure.compareAndSet(null, it.cause())
+}
+if (finished.incrementAndGet() == npools) {
+when (val ex = failure.get()) {
+null -> complete(null)
+else -> completeExceptionally(ex)
+}
 }
 }
 }
@@ -4,7 +4,6 @@ import io.netty.buffer.ByteBuf
|
|||||||
import io.netty.buffer.ByteBufAllocator
|
import io.netty.buffer.ByteBufAllocator
|
||||||
import io.netty.buffer.CompositeByteBuf
|
import io.netty.buffer.CompositeByteBuf
|
||||||
import io.netty.channel.ChannelHandlerContext
|
import io.netty.channel.ChannelHandlerContext
|
||||||
import io.netty.channel.SimpleChannelInboundHandler
|
|
||||||
import io.netty.handler.codec.memcache.DefaultLastMemcacheContent
|
import io.netty.handler.codec.memcache.DefaultLastMemcacheContent
|
||||||
import io.netty.handler.codec.memcache.DefaultMemcacheContent
|
import io.netty.handler.codec.memcache.DefaultMemcacheContent
|
||||||
import io.netty.handler.codec.memcache.LastMemcacheContent
|
import io.netty.handler.codec.memcache.LastMemcacheContent
|
||||||
@@ -13,6 +12,7 @@ import io.netty.handler.codec.memcache.binary.BinaryMemcacheOpcodes
|
|||||||
import io.netty.handler.codec.memcache.binary.BinaryMemcacheResponse
|
import io.netty.handler.codec.memcache.binary.BinaryMemcacheResponse
|
||||||
import io.netty.handler.codec.memcache.binary.BinaryMemcacheResponseStatus
|
import io.netty.handler.codec.memcache.binary.BinaryMemcacheResponseStatus
|
||||||
import io.netty.handler.codec.memcache.binary.DefaultBinaryMemcacheRequest
|
import io.netty.handler.codec.memcache.binary.DefaultBinaryMemcacheRequest
|
||||||
|
import net.woggioni.rbcs.api.CacheHandler
|
||||||
import net.woggioni.rbcs.api.CacheValueMetadata
|
import net.woggioni.rbcs.api.CacheValueMetadata
|
||||||
import net.woggioni.rbcs.api.exception.ContentTooLargeException
|
import net.woggioni.rbcs.api.exception.ContentTooLargeException
|
||||||
import net.woggioni.rbcs.api.message.CacheMessage
|
import net.woggioni.rbcs.api.message.CacheMessage
|
||||||
@@ -58,7 +58,7 @@ class MemcacheCacheHandler(
|
|||||||
private val compressionLevel: Int,
|
private val compressionLevel: Int,
|
||||||
private val chunkSize: Int,
|
private val chunkSize: Int,
|
||||||
private val maxAge: Duration
|
private val maxAge: Duration
|
||||||
) : SimpleChannelInboundHandler<CacheMessage>() {
|
) : CacheHandler() {
|
||||||
companion object {
|
companion object {
|
||||||
private val log = createLogger<MemcacheCacheHandler>()
|
private val log = createLogger<MemcacheCacheHandler>()
|
||||||
|
|
||||||
@@ -69,10 +69,14 @@ class MemcacheCacheHandler(
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
private interface InProgressRequest {
|
||||||
|
|
||||||
|
}
|
||||||
|
|
||||||
private inner class InProgressGetRequest(
|
private inner class InProgressGetRequest(
|
||||||
private val key: String,
|
val key: String,
|
||||||
private val ctx: ChannelHandlerContext
|
private val ctx: ChannelHandlerContext
|
||||||
) {
|
) : InProgressRequest {
|
||||||
private val acc = ctx.alloc().compositeBuffer()
|
private val acc = ctx.alloc().compositeBuffer()
|
||||||
private val chunk = ctx.alloc().compositeBuffer()
|
private val chunk = ctx.alloc().compositeBuffer()
|
||||||
private val outputStream = ByteBufOutputStream(chunk).let {
|
private val outputStream = ByteBufOutputStream(chunk).let {
|
||||||
@@ -98,32 +102,35 @@ class MemcacheCacheHandler(
|
|||||||
acc.retain()
|
acc.retain()
|
||||||
it.readObject() as CacheValueMetadata
|
it.readObject() as CacheValueMetadata
|
||||||
}
|
}
|
||||||
ctx.writeAndFlush(CacheValueFoundResponse(key, metadata))
|
log.trace(ctx) {
|
||||||
|
"Sending response from cache"
|
||||||
|
}
|
||||||
|
sendMessageAndFlush(ctx, CacheValueFoundResponse(key, metadata))
|
||||||
responseSent = true
|
responseSent = true
|
||||||
acc.readerIndex(Int.SIZE_BYTES + mSize)
|
acc.readerIndex(Int.SIZE_BYTES + mSize)
|
||||||
}
|
}
|
||||||
if (responseSent) {
|
if (responseSent) {
|
||||||
acc.readBytes(outputStream, acc.readableBytes())
|
acc.readBytes(outputStream, acc.readableBytes())
|
||||||
if(acc.readableBytes() >= chunkSize) {
|
if (acc.readableBytes() >= chunkSize) {
|
||||||
flush(false)
|
flush(false)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
private fun flush(last : Boolean) {
|
private fun flush(last: Boolean) {
|
||||||
val toSend = extractChunk(chunk, ctx.alloc())
|
val toSend = extractChunk(chunk, ctx.alloc())
|
||||||
val msg = if(last) {
|
val msg = if (last) {
|
||||||
log.trace(ctx) {
|
log.trace(ctx) {
|
||||||
"Sending last chunk to client on channel ${ctx.channel().id().asShortText()}"
|
"Sending last chunk to client"
|
||||||
}
|
}
|
||||||
LastCacheContent(toSend)
|
LastCacheContent(toSend)
|
||||||
} else {
|
} else {
|
||||||
log.trace(ctx) {
|
log.trace(ctx) {
|
||||||
"Sending chunk to client on channel ${ctx.channel().id().asShortText()}"
|
"Sending chunk to client"
|
||||||
}
|
}
|
||||||
CacheContent(toSend)
|
CacheContent(toSend)
|
||||||
}
|
}
|
||||||
ctx.writeAndFlush(msg)
|
sendMessageAndFlush(ctx, msg)
|
||||||
}
|
}
|
||||||
|
|
||||||
fun commit() {
|
fun commit() {
|
||||||
@@ -141,14 +148,14 @@ class MemcacheCacheHandler(
|
|||||||
}
|
}
|
||||||
|
|
||||||
private inner class InProgressPutRequest(
|
private inner class InProgressPutRequest(
|
||||||
private val ch : NettyChannel,
|
private val ch: NettyChannel,
|
||||||
metadata : CacheValueMetadata,
|
metadata: CacheValueMetadata,
|
||||||
val digest : ByteBuf,
|
val digest: ByteBuf,
|
||||||
val requestController: CompletableFuture<MemcacheRequestController>,
|
val requestController: CompletableFuture<MemcacheRequestController>,
|
||||||
private val alloc: ByteBufAllocator
|
private val alloc: ByteBufAllocator
|
||||||
) {
|
) : InProgressRequest {
|
||||||
private var totalSize = 0
|
private var totalSize = 0
|
||||||
private var tmpFile : FileChannel? = null
|
private var tmpFile: FileChannel? = null
|
||||||
private val accumulator = alloc.compositeBuffer()
|
private val accumulator = alloc.compositeBuffer()
|
||||||
private val stream = ByteBufOutputStream(accumulator).let {
|
private val stream = ByteBufOutputStream(accumulator).let {
|
||||||
if (compressionEnabled) {
|
if (compressionEnabled) {
|
||||||
@@ -175,7 +182,7 @@ class MemcacheCacheHandler(
|
|||||||
tmpFile?.let {
|
tmpFile?.let {
|
||||||
flushToDisk(it, accumulator)
|
flushToDisk(it, accumulator)
|
||||||
}
|
}
|
||||||
if(accumulator.readableBytes() > 0x100000) {
|
if (accumulator.readableBytes() > 0x100000) {
|
||||||
log.debug(ch) {
|
log.debug(ch) {
|
||||||
"Entry is too big, buffering it into a file"
|
"Entry is too big, buffering it into a file"
|
||||||
}
|
}
|
||||||
@@ -192,18 +199,18 @@ class MemcacheCacheHandler(
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
private fun flushToDisk(fc : FileChannel, buf : CompositeByteBuf) {
|
private fun flushToDisk(fc: FileChannel, buf: CompositeByteBuf) {
|
||||||
val chunk = extractChunk(buf, alloc)
|
val chunk = extractChunk(buf, alloc)
|
||||||
fc.write(chunk.nioBuffer())
|
fc.write(chunk.nioBuffer())
|
||||||
chunk.release()
|
chunk.release()
|
||||||
}
|
}
|
||||||
|
|
||||||
fun commit() : Pair<Int, ReadableByteChannel> {
|
fun commit(): Pair<Int, ReadableByteChannel> {
|
||||||
digest.release()
|
digest.release()
|
||||||
accumulator.retain()
|
accumulator.retain()
|
||||||
stream.close()
|
stream.close()
|
||||||
val fileChannel = tmpFile
|
val fileChannel = tmpFile
|
||||||
return if(fileChannel != null) {
|
return if (fileChannel != null) {
|
||||||
flushToDisk(fileChannel, accumulator)
|
flushToDisk(fileChannel, accumulator)
|
||||||
accumulator.release()
|
accumulator.release()
|
||||||
fileChannel.position(0)
|
fileChannel.position(0)
|
||||||
@@ -224,8 +231,7 @@ class MemcacheCacheHandler(
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
private var inProgressPutRequest: InProgressPutRequest? = null
|
private var inProgressRequest: InProgressRequest? = null
|
||||||
private var inProgressGetRequest: InProgressGetRequest? = null
|
|
||||||
|
|
||||||
override fun channelRead0(ctx: ChannelHandlerContext, msg: CacheMessage) {
|
override fun channelRead0(ctx: ChannelHandlerContext, msg: CacheMessage) {
|
||||||
when (msg) {
|
when (msg) {
|
||||||
@@ -252,32 +258,39 @@ class MemcacheCacheHandler(
|
|||||||
log.debug(ctx) {
|
log.debug(ctx) {
|
||||||
"Cache hit for key ${msg.key} on memcache"
|
"Cache hit for key ${msg.key} on memcache"
|
||||||
}
|
}
|
||||||
inProgressGetRequest = InProgressGetRequest(msg.key, ctx)
|
inProgressRequest = InProgressGetRequest(msg.key, ctx)
|
||||||
}
|
}
|
||||||
|
|
||||||
BinaryMemcacheResponseStatus.KEY_ENOENT -> {
|
BinaryMemcacheResponseStatus.KEY_ENOENT -> {
|
||||||
log.debug(ctx) {
|
log.debug(ctx) {
|
||||||
"Cache miss for key ${msg.key} on memcache"
|
"Cache miss for key ${msg.key} on memcache"
|
||||||
}
|
}
|
||||||
ctx.writeAndFlush(CacheValueNotFoundResponse())
|
sendMessageAndFlush(ctx, CacheValueNotFoundResponse())
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
override fun contentReceived(content: MemcacheContent) {
|
override fun contentReceived(content: MemcacheContent) {
|
||||||
log.trace(ctx) {
|
log.trace(ctx) {
|
||||||
"${if(content is LastMemcacheContent) "Last chunk" else "Chunk"} of ${content.content().readableBytes()} bytes received from memcache for key ${msg.key}"
|
"${if (content is LastMemcacheContent) "Last chunk" else "Chunk"} of ${
|
||||||
|
content.content().readableBytes()
|
||||||
|
} bytes received from memcache for key ${msg.key}"
|
||||||
}
|
}
|
||||||
inProgressGetRequest?.write(content.content())
|
(inProgressRequest as? InProgressGetRequest)?.let { inProgressGetRequest ->
|
||||||
if (content is LastMemcacheContent) {
|
inProgressGetRequest.write(content.content())
|
||||||
inProgressGetRequest?.commit()
|
if (content is LastMemcacheContent) {
|
||||||
|
inProgressRequest = null
|
||||||
|
inProgressGetRequest.commit()
|
||||||
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
override fun exceptionCaught(ex: Throwable) {
|
override fun exceptionCaught(ex: Throwable) {
|
||||||
inProgressGetRequest?.let {
|
(inProgressRequest as? InProgressGetRequest).let { inProgressGetRequest ->
|
||||||
inProgressGetRequest = null
|
inProgressGetRequest?.let {
|
||||||
it.rollback()
|
inProgressRequest = null
|
||||||
|
it.rollback()
|
||||||
|
}
|
||||||
}
|
}
|
||||||
this@MemcacheCacheHandler.exceptionCaught(ctx, ex)
|
this@MemcacheCacheHandler.exceptionCaught(ctx, ex)
|
||||||
}
|
}
|
||||||
@@ -290,6 +303,7 @@ class MemcacheCacheHandler(
|
|||||||
setOpcode(BinaryMemcacheOpcodes.GET)
|
setOpcode(BinaryMemcacheOpcodes.GET)
|
||||||
}
|
}
|
||||||
requestHandle.sendRequest(request)
|
requestHandle.sendRequest(request)
|
||||||
|
requestHandle.sendContent(LastMemcacheContent.EMPTY_LAST_CONTENT)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -305,8 +319,9 @@ class MemcacheCacheHandler(
|
|||||||
log.debug(ctx) {
|
log.debug(ctx) {
|
||||||
"Inserted key ${msg.key} into memcache"
|
"Inserted key ${msg.key} into memcache"
|
||||||
}
|
}
|
||||||
ctx.writeAndFlush(CachePutResponse(msg.key))
|
sendMessageAndFlush(ctx, CachePutResponse(msg.key))
|
||||||
}
|
}
|
||||||
|
|
||||||
else -> this@MemcacheCacheHandler.exceptionCaught(ctx, MemcacheException(status))
|
else -> this@MemcacheCacheHandler.exceptionCaught(ctx, MemcacheException(status))
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -323,86 +338,103 @@ class MemcacheCacheHandler(
|
|||||||
this@MemcacheCacheHandler.exceptionCaught(ctx, ex)
|
this@MemcacheCacheHandler.exceptionCaught(ctx, ex)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
inProgressPutRequest = InProgressPutRequest(ctx.channel(), msg.metadata, key, requestController, ctx.alloc())
|
inProgressRequest = InProgressPutRequest(ctx.channel(), msg.metadata, key, requestController, ctx.alloc())
|
||||||
}
|
}
|
||||||
|
|
||||||
private fun handleCacheContent(ctx: ChannelHandlerContext, msg: CacheContent) {
|
private fun handleCacheContent(ctx: ChannelHandlerContext, msg: CacheContent) {
|
||||||
inProgressPutRequest?.let { request ->
|
val request = inProgressRequest
|
||||||
log.trace(ctx) {
|
when (request) {
|
||||||
"Received chunk of ${msg.content().readableBytes()} bytes for memcache"
|
is InProgressPutRequest -> {
|
||||||
|
log.trace(ctx) {
|
||||||
|
"Received chunk of ${msg.content().readableBytes()} bytes for memcache"
|
||||||
|
}
|
||||||
|
request.write(msg.content())
|
||||||
|
}
|
||||||
|
|
||||||
|
is InProgressGetRequest -> {
|
||||||
|
msg.release()
|
||||||
}
|
}
|
||||||
request.write(msg.content())
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
private fun handleLastCacheContent(ctx: ChannelHandlerContext, msg: LastCacheContent) {
|
private fun handleLastCacheContent(ctx: ChannelHandlerContext, msg: LastCacheContent) {
|
||||||
inProgressPutRequest?.let { request ->
|
val request = inProgressRequest
|
||||||
inProgressPutRequest = null
|
when (request) {
|
||||||
log.trace(ctx) {
|
is InProgressPutRequest -> {
|
||||||
"Received last chunk of ${msg.content().readableBytes()} bytes for memcache"
|
inProgressRequest = null
|
||||||
}
|
log.trace(ctx) {
|
||||||
request.write(msg.content())
|
"Received last chunk of ${msg.content().readableBytes()} bytes for memcache"
|
||||||
val key = request.digest.retainedDuplicate()
|
}
|
||||||
val (payloadSize, payloadSource) = request.commit()
|
request.write(msg.content())
|
||||||
val extras = ctx.alloc().buffer(8, 8)
|
val key = request.digest.retainedDuplicate()
|
||||||
extras.writeInt(0)
|
val (payloadSize, payloadSource) = request.commit()
|
||||||
extras.writeInt(encodeExpiry(maxAge))
|
val extras = ctx.alloc().buffer(8, 8)
|
||||||
val totalBodyLength = request.digest.readableBytes() + extras.readableBytes() + payloadSize
|
extras.writeInt(0)
|
||||||
request.requestController.whenComplete { requestController, ex ->
|
extras.writeInt(encodeExpiry(maxAge))
|
||||||
if(ex == null) {
|
val totalBodyLength = request.digest.readableBytes() + extras.readableBytes() + payloadSize
|
||||||
log.trace(ctx) {
|
log.trace(ctx) {
|
||||||
"Sending SET request to memcache"
|
"Trying to send SET request to memcache"
|
||||||
}
|
}
|
||||||
requestController.sendRequest(DefaultBinaryMemcacheRequest().apply {
|
request.requestController.whenComplete { requestController, ex ->
|
||||||
setOpcode(BinaryMemcacheOpcodes.SET)
|
if (ex == null) {
|
||||||
setKey(key)
|
log.trace(ctx) {
|
||||||
setExtras(extras)
|
"Sending SET request to memcache"
|
||||||
setTotalBodyLength(totalBodyLength)
|
}
|
||||||
})
|
requestController.sendRequest(DefaultBinaryMemcacheRequest().apply {
|
||||||
log.trace(ctx) {
|
setOpcode(BinaryMemcacheOpcodes.SET)
|
||||||
"Sending request payload to memcache"
|
setKey(key)
|
||||||
}
|
setExtras(extras)
|
||||||
payloadSource.use { source ->
|
setTotalBodyLength(totalBodyLength)
|
||||||
val bb = ByteBuffer.allocate(chunkSize)
|
})
|
||||||
while (true) {
|
log.trace(ctx) {
|
||||||
val read = source.read(bb)
|
"Sending request payload to memcache"
|
||||||
bb.limit()
|
}
|
||||||
if(read >= 0 && bb.position() < chunkSize && bb.hasRemaining()) {
|
payloadSource.use { source ->
|
||||||
continue
|
val bb = ByteBuffer.allocate(chunkSize)
|
||||||
}
|
while (true) {
|
||||||
val chunk = ctx.alloc().buffer(chunkSize)
|
val read = source.read(bb)
|
||||||
bb.flip()
|
bb.limit()
|
||||||
chunk.writeBytes(bb)
|
if (read >= 0 && bb.position() < chunkSize && bb.hasRemaining()) {
|
||||||
bb.clear()
|
continue
|
||||||
log.trace(ctx) {
|
}
|
||||||
"Sending ${chunk.readableBytes()} bytes chunk to memcache"
|
val chunk = ctx.alloc().buffer(chunkSize)
|
||||||
}
|
bb.flip()
|
||||||
if(read < 0) {
|
chunk.writeBytes(bb)
|
||||||
requestController.sendContent(DefaultLastMemcacheContent(chunk))
|
bb.clear()
|
||||||
break
|
log.trace(ctx) {
|
||||||
} else {
|
"Sending ${chunk.readableBytes()} bytes chunk to memcache"
|
||||||
requestController.sendContent(DefaultMemcacheContent(chunk))
|
}
|
||||||
|
if (read < 0) {
|
||||||
|
requestController.sendContent(DefaultLastMemcacheContent(chunk))
|
||||||
|
break
|
||||||
|
} else {
|
||||||
|
requestController.sendContent(DefaultMemcacheContent(chunk))
|
||||||
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
} else {
|
||||||
|
payloadSource.close()
|
||||||
}
|
}
|
||||||
} else {
|
|
||||||
payloadSource.close()
|
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
|
override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
|
||||||
inProgressGetRequest?.let {
|
val request = inProgressRequest
|
||||||
inProgressGetRequest = null
|
when (request) {
|
||||||
it.rollback()
|
is InProgressPutRequest -> {
|
||||||
}
|
inProgressRequest = null
|
||||||
inProgressPutRequest?.let {
|
request.requestController.thenAccept { controller ->
|
||||||
inProgressPutRequest = null
|
controller.exceptionCaught(cause)
|
||||||
it.requestController.thenAccept { controller ->
|
}
|
||||||
controller.exceptionCaught(cause)
|
request.rollback()
|
||||||
|
}
|
||||||
|
|
||||||
|
is InProgressGetRequest -> {
|
||||||
|
inProgressRequest = null
|
||||||
|
request.rollback()
|
||||||
}
|
}
|
||||||
it.rollback()
|
|
||||||
}
|
}
|
||||||
super.exceptionCaught(ctx, cause)
|
super.exceptionCaught(ctx, cause)
|
||||||
}
|
}
|
||||||
|
@@ -28,9 +28,6 @@ class MemcacheCacheProvider : CacheProvider<MemcacheCacheConfiguration> {
|
|||||||
val maxAge = el.renderAttribute("max-age")
|
val maxAge = el.renderAttribute("max-age")
|
||||||
?.let(Duration::parse)
|
?.let(Duration::parse)
|
||||||
?: Duration.ofDays(1)
|
?: Duration.ofDays(1)
|
||||||
val chunkSize = el.renderAttribute("chunk-size")
|
|
||||||
?.let(Integer::decode)
|
|
||||||
?: 0x10000
|
|
||||||
val compressionLevel = el.renderAttribute("compression-level")
|
val compressionLevel = el.renderAttribute("compression-level")
|
||||||
?.let(Integer::decode)
|
?.let(Integer::decode)
|
||||||
?: -1
|
?: -1
|
||||||
@@ -63,8 +60,7 @@ class MemcacheCacheProvider : CacheProvider<MemcacheCacheConfiguration> {
|
|||||||
maxAge,
|
maxAge,
|
||||||
digestAlgorithm,
|
digestAlgorithm,
|
||||||
compressionMode,
|
compressionMode,
|
||||||
compressionLevel,
|
compressionLevel
|
||||||
chunkSize
|
|
||||||
)
|
)
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -84,7 +80,6 @@ class MemcacheCacheProvider : CacheProvider<MemcacheCacheConfiguration> {
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
attr("max-age", maxAge.toString())
|
attr("max-age", maxAge.toString())
|
||||||
attr("chunk-size", chunkSize.toString())
|
|
||||||
digestAlgorithm?.let { digestAlgorithm ->
|
digestAlgorithm?.let { digestAlgorithm ->
|
||||||
attr("digest", digestAlgorithm)
|
attr("digest", digestAlgorithm)
|
||||||
}
|
}
|
||||||
|
@@ -12,7 +12,6 @@ import io.netty.channel.ChannelPipeline
|
|||||||
import io.netty.channel.EventLoopGroup
|
import io.netty.channel.EventLoopGroup
|
||||||
import io.netty.channel.SimpleChannelInboundHandler
|
import io.netty.channel.SimpleChannelInboundHandler
|
||||||
import io.netty.channel.pool.AbstractChannelPoolHandler
|
import io.netty.channel.pool.AbstractChannelPoolHandler
|
||||||
import io.netty.channel.pool.ChannelPool
|
|
||||||
import io.netty.channel.pool.FixedChannelPool
|
import io.netty.channel.pool.FixedChannelPool
|
||||||
import io.netty.channel.socket.SocketChannel
|
import io.netty.channel.socket.SocketChannel
|
||||||
import io.netty.handler.codec.memcache.LastMemcacheContent
|
import io.netty.handler.codec.memcache.LastMemcacheContent
|
||||||
@@ -24,7 +23,7 @@ import io.netty.handler.codec.memcache.binary.BinaryMemcacheResponse
|
|||||||
import io.netty.util.concurrent.GenericFutureListener
|
import io.netty.util.concurrent.GenericFutureListener
|
||||||
import net.woggioni.rbcs.common.HostAndPort
|
import net.woggioni.rbcs.common.HostAndPort
|
||||||
import net.woggioni.rbcs.common.createLogger
|
import net.woggioni.rbcs.common.createLogger
|
||||||
import net.woggioni.rbcs.common.warn
|
import net.woggioni.rbcs.common.trace
|
||||||
import net.woggioni.rbcs.server.memcache.MemcacheCacheConfiguration
|
import net.woggioni.rbcs.server.memcache.MemcacheCacheConfiguration
|
||||||
import net.woggioni.rbcs.server.memcache.MemcacheCacheHandler
|
import net.woggioni.rbcs.server.memcache.MemcacheCacheHandler
|
||||||
import java.io.IOException
|
import java.io.IOException
|
||||||
@@ -94,18 +93,6 @@ class MemcacheClient(
|
|||||||
pool.acquire().addListener(object : GenericFutureListener<NettyFuture<Channel>> {
|
pool.acquire().addListener(object : GenericFutureListener<NettyFuture<Channel>> {
|
||||||
override fun operationComplete(channelFuture: NettyFuture<Channel>) {
|
override fun operationComplete(channelFuture: NettyFuture<Channel>) {
|
||||||
if (channelFuture.isSuccess) {
|
if (channelFuture.isSuccess) {
|
||||||
|
|
||||||
var requestSent = false
|
|
||||||
var requestBodySent = false
|
|
||||||
var requestFinished = false
|
|
||||||
var responseReceived = false
|
|
||||||
var responseBodyReceived = false
|
|
||||||
var responseFinished = false
|
|
||||||
var requestBodySize = 0
|
|
||||||
var requestBodyBytesSent = 0
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
val channel = channelFuture.now
|
val channel = channelFuture.now
|
||||||
var connectionClosedByTheRemoteServer = true
|
var connectionClosedByTheRemoteServer = true
|
||||||
val closeCallback = {
|
val closeCallback = {
|
||||||
@@ -113,14 +100,7 @@ class MemcacheClient(
|
|||||||
val ex = IOException("The memcache server closed the connection")
|
val ex = IOException("The memcache server closed the connection")
|
||||||
val completed = response.completeExceptionally(ex)
|
val completed = response.completeExceptionally(ex)
|
||||||
if(!completed) responseHandler.exceptionCaught(ex)
|
if(!completed) responseHandler.exceptionCaught(ex)
|
||||||
log.warn {
|
|
||||||
"RequestSent: $requestSent, RequestBodySent: $requestBodySent, " +
|
|
||||||
"RequestFinished: $requestFinished, ResponseReceived: $responseReceived, " +
|
|
||||||
"ResponseBodyReceived: $responseBodyReceived, ResponseFinished: $responseFinished, " +
|
|
||||||
"RequestBodySize: $requestBodySize, RequestBodyBytesSent: $requestBodyBytesSent"
|
|
||||||
}
|
|
||||||
}
|
}
|
||||||
pool.release(channel)
|
|
||||||
}
|
}
|
||||||
val closeListener = ChannelFutureListener {
|
val closeListener = ChannelFutureListener {
|
||||||
closeCallback()
|
closeCallback()
|
||||||
@@ -140,18 +120,14 @@ class MemcacheClient(
|
|||||||
when (msg) {
|
when (msg) {
|
||||||
is BinaryMemcacheResponse -> {
|
is BinaryMemcacheResponse -> {
|
||||||
responseHandler.responseReceived(msg)
|
responseHandler.responseReceived(msg)
|
||||||
responseReceived = true
|
|
||||||
}
|
}
|
||||||
|
|
||||||
is LastMemcacheContent -> {
|
is LastMemcacheContent -> {
|
||||||
responseFinished = true
|
|
||||||
responseHandler.contentReceived(msg)
|
responseHandler.contentReceived(msg)
|
||||||
pipeline.remove(this)
|
pipeline.remove(this)
|
||||||
pool.release(channel)
|
|
||||||
}
|
}
|
||||||
|
|
||||||
is MemcacheContent -> {
|
is MemcacheContent -> {
|
||||||
responseBodyReceived = true
|
|
||||||
responseHandler.contentReceived(msg)
|
responseHandler.contentReceived(msg)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
@@ -165,35 +141,43 @@ class MemcacheClient(
|
|||||||
override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
|
override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
|
||||||
connectionClosedByTheRemoteServer = false
|
connectionClosedByTheRemoteServer = false
|
||||||
ctx.close()
|
ctx.close()
|
||||||
pool.release(channel)
|
|
||||||
responseHandler.exceptionCaught(cause)
|
responseHandler.exceptionCaught(cause)
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
channel.pipeline()
|
channel.pipeline().addLast(handler)
|
||||||
.addLast("client-handler", handler)
|
|
||||||
response.complete(object : MemcacheRequestController {
|
response.complete(object : MemcacheRequestController {
|
||||||
|
private var channelReleased = false
|
||||||
|
|
||||||
override fun sendRequest(request: BinaryMemcacheRequest) {
|
override fun sendRequest(request: BinaryMemcacheRequest) {
|
||||||
requestBodySize = request.totalBodyLength() - request.keyLength() - request.extrasLength()
|
|
||||||
channel.writeAndFlush(request)
|
channel.writeAndFlush(request)
|
||||||
requestSent = true
|
|
||||||
}
|
}
|
||||||
|
|
||||||
override fun sendContent(content: MemcacheContent) {
|
override fun sendContent(content: MemcacheContent) {
|
||||||
val size = content.content().readableBytes()
|
|
||||||
channel.writeAndFlush(content).addListener {
|
channel.writeAndFlush(content).addListener {
|
||||||
requestBodyBytesSent += size
|
|
||||||
requestBodySent = true
|
|
||||||
if(content is LastMemcacheContent) {
|
if(content is LastMemcacheContent) {
|
||||||
requestFinished = true
|
if(!channelReleased) {
|
||||||
|
pool.release(channel)
|
||||||
|
channelReleased = true
|
||||||
|
log.trace(channel) {
|
||||||
|
"Channel released"
|
||||||
|
}
|
||||||
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
override fun exceptionCaught(ex: Throwable) {
|
override fun exceptionCaught(ex: Throwable) {
|
||||||
|
log.warn(ex.message, ex)
|
||||||
connectionClosedByTheRemoteServer = false
|
connectionClosedByTheRemoteServer = false
|
||||||
channel.close()
|
channel.close()
|
||||||
|
if(!channelReleased) {
|
||||||
|
pool.release(channel)
|
||||||
|
channelReleased = true
|
||||||
|
log.trace(channel) {
|
||||||
|
"Channel released"
|
||||||
|
}
|
||||||
|
}
|
||||||
}
|
}
|
||||||
})
|
})
|
||||||
} else {
|
} else {
|
||||||
|
@@ -24,6 +24,7 @@ module net.woggioni.rbcs.server {
|
|||||||
opens net.woggioni.rbcs.server;
|
opens net.woggioni.rbcs.server;
|
||||||
opens net.woggioni.rbcs.server.schema;
|
opens net.woggioni.rbcs.server.schema;
|
||||||
|
|
||||||
|
|
||||||
uses CacheProvider;
|
uses CacheProvider;
|
||||||
provides CacheProvider with FileSystemCacheProvider, InMemoryCacheProvider;
|
provides CacheProvider with FileSystemCacheProvider, InMemoryCacheProvider;
|
||||||
}
|
}
|
@@ -21,6 +21,7 @@ import io.netty.channel.socket.nio.NioSocketChannel
|
|||||||
import io.netty.handler.codec.compression.CompressionOptions
|
import io.netty.handler.codec.compression.CompressionOptions
|
||||||
import io.netty.handler.codec.http.DefaultHttpContent
|
import io.netty.handler.codec.http.DefaultHttpContent
|
||||||
import io.netty.handler.codec.http.HttpContentCompressor
|
import io.netty.handler.codec.http.HttpContentCompressor
|
||||||
|
import io.netty.handler.codec.http.HttpDecoderConfig
|
||||||
import io.netty.handler.codec.http.HttpHeaderNames
|
import io.netty.handler.codec.http.HttpHeaderNames
|
||||||
import io.netty.handler.codec.http.HttpRequest
|
import io.netty.handler.codec.http.HttpRequest
|
||||||
import io.netty.handler.codec.http.HttpServerCodec
|
import io.netty.handler.codec.http.HttpServerCodec
|
||||||
@@ -53,9 +54,9 @@ import net.woggioni.rbcs.server.auth.RoleAuthorizer
|
|||||||
import net.woggioni.rbcs.server.configuration.Parser
|
import net.woggioni.rbcs.server.configuration.Parser
|
||||||
import net.woggioni.rbcs.server.configuration.Serializer
|
import net.woggioni.rbcs.server.configuration.Serializer
|
||||||
import net.woggioni.rbcs.server.exception.ExceptionHandler
|
import net.woggioni.rbcs.server.exception.ExceptionHandler
|
||||||
|
import net.woggioni.rbcs.server.handler.BlackHoleRequestHandler
|
||||||
import net.woggioni.rbcs.server.handler.MaxRequestSizeHandler
|
import net.woggioni.rbcs.server.handler.MaxRequestSizeHandler
|
||||||
import net.woggioni.rbcs.server.handler.ServerHandler
|
import net.woggioni.rbcs.server.handler.ServerHandler
|
||||||
import net.woggioni.rbcs.server.handler.TraceHandler
|
|
||||||
import net.woggioni.rbcs.server.throttling.BucketManager
|
import net.woggioni.rbcs.server.throttling.BucketManager
|
||||||
import net.woggioni.rbcs.server.throttling.ThrottlingHandler
|
import net.woggioni.rbcs.server.throttling.ThrottlingHandler
|
||||||
import java.io.OutputStream
|
import java.io.OutputStream
|
||||||
@@ -298,6 +299,7 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
|
|||||||
"Closed connection ${ch.id().asShortText()} with ${ch.remoteAddress()}"
|
"Closed connection ${ch.id().asShortText()} with ${ch.remoteAddress()}"
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
ch.config().setAutoRead(false)
|
||||||
val pipeline = ch.pipeline()
|
val pipeline = ch.pipeline()
|
||||||
cfg.connection.also { conn ->
|
cfg.connection.also { conn ->
|
||||||
val readIdleTimeout = conn.readIdleTimeout.toMillis()
|
val readIdleTimeout = conn.readIdleTimeout.toMillis()
|
||||||
@@ -340,7 +342,10 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
 sslContext?.newHandler(ch.alloc())?.also {
 pipeline.addLast(SSL_HANDLER_NAME, it)
 }
-pipeline.addLast(HttpServerCodec())
+val httpDecoderConfig = HttpDecoderConfig().apply {
+maxChunkSize = cfg.connection.chunkSize
+}
+pipeline.addLast(HttpServerCodec(httpDecoderConfig))
 pipeline.addLast(MaxRequestSizeHandler.NAME, MaxRequestSizeHandler(cfg.connection.maxRequestSize))
 pipeline.addLast(HttpChunkContentCompressor(1024))
 pipeline.addLast(ChunkedWriteHandler())
@@ -351,13 +356,13 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {

 val serverHandler = let {
 val prefix = Path.of("/").resolve(Path.of(cfg.serverPath ?: "/"))
-ServerHandler(prefix)
+ServerHandler(prefix) {
+cacheHandlerFactory.newHandler(cfg, ch.eventLoop(), channelFactory, datagramChannelFactory)
+}
 }
 pipeline.addLast(eventExecutorGroup, ServerHandler.NAME, serverHandler)
-pipeline.addLast(cacheHandlerFactory.newHandler(ch.eventLoop(), channelFactory, datagramChannelFactory))
-pipeline.addLast(TraceHandler)
-pipeline.addLast(ExceptionHandler)
+pipeline.addLast(ExceptionHandler.NAME, ExceptionHandler)
+pipeline.addLast(BlackHoleRequestHandler.NAME, BlackHoleRequestHandler())
 }

 override fun asyncClose() = cacheHandlerFactory.asyncClose()
@@ -368,13 +373,14 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
 private val bossGroup: EventExecutorGroup,
 private val executorGroups: Iterable<EventExecutorGroup>,
 private val serverInitializer: AsyncCloseable,
-) : Future<Void> by from(closeFuture, executorGroups, serverInitializer) {
+) : Future<Void> by from(closeFuture, bossGroup, executorGroups, serverInitializer) {

 companion object {
 private val log = createLogger<ServerHandle>()

 private fun from(
 closeFuture: ChannelFuture,
+bossGroup: EventExecutorGroup,
 executorGroups: Iterable<EventExecutorGroup>,
 serverInitializer: AsyncCloseable
 ): CompletableFuture<Void> {
@@ -382,22 +388,15 @@ class RemoteBuildCacheServer(private val cfg: Configuration) {
 closeFuture.addListener {
 val errors = mutableListOf<Throwable>()
 val deadline = Instant.now().plusSeconds(20)
-try {
-serverInitializer.close()
-} catch (ex: Throwable) {
-log.error(ex.message, ex)
-errors.addLast(ex)
-}

-serverInitializer.asyncClose().whenComplete { _, ex ->
+serverInitializer.asyncClose().whenCompleteAsync { _, ex ->
 if(ex != null) {
 log.error(ex.message, ex)
 errors.addLast(ex)
 }

-executorGroups.map {
-it.shutdownGracefully()
-}
+executorGroups.forEach(EventExecutorGroup::shutdownGracefully)
+bossGroup.terminationFuture().sync()

 for (executorGroup in executorGroups) {
 val future = executorGroup.terminationFuture()
|
@@ -17,7 +17,6 @@ data class FileSystemCacheConfiguration(
|
|||||||
val digestAlgorithm : String?,
|
val digestAlgorithm : String?,
|
||||||
val compressionEnabled: Boolean,
|
val compressionEnabled: Boolean,
|
||||||
val compressionLevel: Int,
|
val compressionLevel: Int,
|
||||||
val chunkSize: Int,
|
|
||||||
) : Configuration.Cache {
|
) : Configuration.Cache {
|
||||||
|
|
||||||
override fun materialize() = object : CacheHandlerFactory {
|
override fun materialize() = object : CacheHandlerFactory {
|
||||||
@@ -26,10 +25,11 @@ data class FileSystemCacheConfiguration(
|
|||||||
override fun asyncClose() = cache.asyncClose()
|
override fun asyncClose() = cache.asyncClose()
|
||||||
|
|
||||||
override fun newHandler(
|
override fun newHandler(
|
||||||
|
cfg : Configuration,
|
||||||
eventLoop: EventLoopGroup,
|
eventLoop: EventLoopGroup,
|
||||||
socketChannelFactory: ChannelFactory<SocketChannel>,
|
socketChannelFactory: ChannelFactory<SocketChannel>,
|
||||||
datagramChannelFactory: ChannelFactory<DatagramChannel>
|
datagramChannelFactory: ChannelFactory<DatagramChannel>
|
||||||
) = FileSystemCacheHandler(cache, digestAlgorithm, compressionEnabled, compressionLevel, chunkSize)
|
) = FileSystemCacheHandler(cache, digestAlgorithm, compressionEnabled, compressionLevel, cfg.connection.chunkSize)
|
||||||
}
|
}
|
||||||
|
|
||||||
override fun getNamespaceURI() = RBCS.RBCS_NAMESPACE_URI
|
override fun getNamespaceURI() = RBCS.RBCS_NAMESPACE_URI
|
||||||
|
@@ -2,9 +2,9 @@ package net.woggioni.rbcs.server.cache
 
 import io.netty.buffer.ByteBuf
 import io.netty.channel.ChannelHandlerContext
-import io.netty.channel.SimpleChannelInboundHandler
 import io.netty.handler.codec.http.LastHttpContent
 import io.netty.handler.stream.ChunkedNioFile
+import net.woggioni.rbcs.api.CacheHandler
 import net.woggioni.rbcs.api.message.CacheMessage
 import net.woggioni.rbcs.api.message.CacheMessage.CacheContent
 import net.woggioni.rbcs.api.message.CacheMessage.CacheGetRequest
@@ -26,12 +26,18 @@ class FileSystemCacheHandler(
     private val compressionEnabled: Boolean,
     private val compressionLevel: Int,
     private val chunkSize: Int
-) : SimpleChannelInboundHandler<CacheMessage>() {
+) : CacheHandler() {
 
+    private interface InProgressRequest{
+
+    }
+
+    private class InProgressGetRequest(val request : CacheGetRequest) : InProgressRequest
+
     private inner class InProgressPutRequest(
         val key : String,
         private val fileSink : FileSystemCache.FileSink
-    ) {
+    ) : InProgressRequest {
 
         private val stream = Channels.newOutputStream(fileSink.channel).let {
             if (compressionEnabled) {
@@ -55,7 +61,7 @@ class FileSystemCacheHandler(
         }
     }
 
-    private var inProgressPutRequest: InProgressPutRequest? = null
+    private var inProgressRequest: InProgressRequest? = null
 
     override fun channelRead0(ctx: ChannelHandlerContext, msg: CacheMessage) {
         when (msg) {
@@ -68,55 +74,64 @@ class FileSystemCacheHandler(
     }
 
     private fun handleGetRequest(ctx: ChannelHandlerContext, msg: CacheGetRequest) {
-        val key = String(Base64.getUrlEncoder().encode(processCacheKey(msg.key, digestAlgorithm)))
-        cache.get(key)?.also { entryValue ->
-            ctx.writeAndFlush(CacheValueFoundResponse(msg.key, entryValue.metadata))
-            entryValue.channel.let { channel ->
-                if(compressionEnabled) {
-                    InflaterInputStream(Channels.newInputStream(channel)).use { stream ->
-
-                        outerLoop@
-                        while (true) {
-                            val buf = ctx.alloc().heapBuffer(chunkSize)
-                            while(buf.readableBytes() < chunkSize) {
-                                val read = buf.writeBytes(stream, chunkSize)
-                                if(read < 0) {
-                                    ctx.writeAndFlush(LastCacheContent(buf))
-                                    break@outerLoop
-                                }
-                            }
-                            ctx.writeAndFlush(CacheContent(buf))
-                        }
-                    }
-                } else {
-                    ctx.writeAndFlush(ChunkedNioFile(channel, entryValue.offset, entryValue.size - entryValue.offset, chunkSize))
-                    ctx.writeAndFlush(LastHttpContent.EMPTY_LAST_CONTENT)
-                }
-            }
-        } ?: ctx.writeAndFlush(CacheValueNotFoundResponse())
+        inProgressRequest = InProgressGetRequest(msg)
     }
 
     private fun handlePutRequest(ctx: ChannelHandlerContext, msg: CachePutRequest) {
         val key = String(Base64.getUrlEncoder().encode(processCacheKey(msg.key, digestAlgorithm)))
         val sink = cache.put(key, msg.metadata)
-        inProgressPutRequest = InProgressPutRequest(msg.key, sink)
+        inProgressRequest = InProgressPutRequest(msg.key, sink)
     }
 
     private fun handleCacheContent(ctx: ChannelHandlerContext, msg: CacheContent) {
-        inProgressPutRequest!!.write(msg.content())
+        val request = inProgressRequest
+        if(request is InProgressPutRequest) {
+            request.write(msg.content())
+        }
     }
 
     private fun handleLastCacheContent(ctx: ChannelHandlerContext, msg: LastCacheContent) {
-        inProgressPutRequest?.let { request ->
-            inProgressPutRequest = null
-            request.write(msg.content())
-            request.commit()
-            ctx.writeAndFlush(CachePutResponse(request.key))
+        when(val request = inProgressRequest) {
+            is InProgressPutRequest -> {
+                inProgressRequest = null
+                request.write(msg.content())
+                request.commit()
+                sendMessageAndFlush(ctx, CachePutResponse(request.key))
+            }
+            is InProgressGetRequest -> {
+                val key = String(Base64.getUrlEncoder().encode(processCacheKey(request.request.key, digestAlgorithm)))
+                cache.get(key)?.also { entryValue ->
+                    sendMessageAndFlush(ctx, CacheValueFoundResponse(request.request.key, entryValue.metadata))
+                    entryValue.channel.let { channel ->
+                        if(compressionEnabled) {
+                            InflaterInputStream(Channels.newInputStream(channel)).use { stream ->
+
+                                outerLoop@
+                                while (true) {
+                                    val buf = ctx.alloc().heapBuffer(chunkSize)
+                                    while(buf.readableBytes() < chunkSize) {
+                                        val read = buf.writeBytes(stream, chunkSize)
+                                        if(read < 0) {
+                                            sendMessageAndFlush(ctx, LastCacheContent(buf))
+                                            break@outerLoop
+                                        }
+                                    }
+                                    sendMessageAndFlush(ctx, CacheContent(buf))
+                                }
+                            }
+                        } else {
+                            sendMessage(ctx, ChunkedNioFile(channel, entryValue.offset, entryValue.size - entryValue.offset, chunkSize))
+                            sendMessageAndFlush(ctx, LastHttpContent.EMPTY_LAST_CONTENT)
+                        }
+                    }
+                } ?: sendMessageAndFlush(ctx, CacheValueNotFoundResponse())
+            }
         }
     }
 
     override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
-        inProgressPutRequest?.rollback()
+        (inProgressRequest as? InProgressPutRequest)?.rollback()
         super.exceptionCaught(ctx, cause)
     }
 }
@@ -31,9 +31,6 @@ class FileSystemCacheProvider : CacheProvider<FileSystemCacheConfiguration> {
             ?.let(String::toInt)
             ?: Deflater.DEFAULT_COMPRESSION
         val digestAlgorithm = el.renderAttribute("digest")
-        val chunkSize = el.renderAttribute("chunk-size")
-            ?.let(Integer::decode)
-            ?: 0x10000
 
         return FileSystemCacheConfiguration(
             path,
@@ -41,7 +38,6 @@ class FileSystemCacheProvider : CacheProvider<FileSystemCacheConfiguration> {
             digestAlgorithm,
             enableCompression,
             compressionLevel,
-            chunkSize
         )
     }
 
@@ -63,7 +59,6 @@ class FileSystemCacheProvider : CacheProvider<FileSystemCacheConfiguration> {
             }?.let {
                 attr("compression-level", it.toString())
             }
-            attr("chunk-size", chunkSize.toString())
         }
         result
     }
@@ -6,11 +6,11 @@ import net.woggioni.rbcs.api.CacheValueMetadata
 import net.woggioni.rbcs.common.createLogger
 import java.time.Duration
 import java.time.Instant
+import java.util.PriorityQueue
 import java.util.concurrent.CompletableFuture
-import java.util.concurrent.ConcurrentHashMap
-import java.util.concurrent.PriorityBlockingQueue
 import java.util.concurrent.TimeUnit
-import java.util.concurrent.atomic.AtomicLong
+import java.util.concurrent.locks.ReentrantReadWriteLock
+import kotlin.concurrent.withLock
 
 private class CacheKey(private val value: ByteArray) {
     override fun equals(other: Any?) = if (other is CacheKey) {
@@ -34,15 +34,17 @@ class InMemoryCache(
         private val log = createLogger<InMemoryCache>()
     }
 
-    private val size = AtomicLong()
-    private val map = ConcurrentHashMap<CacheKey, CacheEntry>()
+    private var mapSize : Long = 0
+    private val map = HashMap<CacheKey, CacheEntry>()
+    private val lock = ReentrantReadWriteLock()
+    private val cond = lock.writeLock().newCondition()
 
     private class RemovalQueueElement(val key: CacheKey, val value: CacheEntry, val expiry: Instant) :
         Comparable<RemovalQueueElement> {
         override fun compareTo(other: RemovalQueueElement) = expiry.compareTo(other.expiry)
     }
 
-    private val removalQueue = PriorityBlockingQueue<RemovalQueueElement>()
+    private val removalQueue = PriorityQueue<RemovalQueueElement>()
 
     @Volatile
     private var running = true
@@ -51,21 +53,32 @@ class InMemoryCache(
     init {
         Thread.ofVirtual().name("in-memory-cache-gc").start {
             try {
-                while (running) {
-                    val el = removalQueue.poll(1, TimeUnit.SECONDS) ?: continue
-                    val value = el.value
-                    val now = Instant.now()
-                    if (now > el.expiry) {
-                        val removed = map.remove(el.key, value)
-                        if (removed) {
-                            updateSizeAfterRemoval(value.content)
-                            //Decrease the reference count for map
-                            value.content.release()
+                lock.writeLock().withLock {
+                    while (running) {
+                        val el = removalQueue.poll()
+                        if(el == null) {
+                            cond.await(1000, TimeUnit.MILLISECONDS)
+                            continue
+                        }
+                        val value = el.value
+                        val now = Instant.now()
+                        if (now > el.expiry) {
+                            val removed = map.remove(el.key, value)
+                            if (removed) {
+                                updateSizeAfterRemoval(value.content)
+                                //Decrease the reference count for map
+                                value.content.release()
+                            }
+                        } else {
+                            removalQueue.offer(el)
+                            val interval = minOf(Duration.between(now, el.expiry), Duration.ofSeconds(1))
+                            cond.await(interval.toMillis(), TimeUnit.MILLISECONDS)
                         }
-                    } else {
-                        removalQueue.put(el)
-                        Thread.sleep(minOf(Duration.between(now, el.expiry), Duration.ofSeconds(1)))
                     }
+                    map.forEach {
+                        it.value.content.release()
+                    }
+                    map.clear()
                 }
                 complete(null)
             } catch (ex: Throwable) {
@@ -77,7 +90,7 @@ class InMemoryCache(
 
     fun removeEldest(): Long {
         while (true) {
-            val el = removalQueue.take()
+            val el = removalQueue.poll() ?: return mapSize
             val value = el.value
             val removed = map.remove(el.key, value)
             if (removed) {
@@ -90,18 +103,22 @@ class InMemoryCache(
     }
 
     private fun updateSizeAfterRemoval(removed: ByteBuf): Long {
-        return size.updateAndGet { currentSize: Long ->
-            currentSize - removed.readableBytes()
-        }
+        mapSize -= removed.readableBytes()
+        return mapSize
     }
 
     override fun asyncClose() : CompletableFuture<Void> {
        running = false
+        lock.writeLock().withLock {
+            cond.signal()
+        }
         return closeFuture
     }
 
-    fun get(key: ByteArray) = map[CacheKey(key)]?.run {
-        CacheEntry(metadata, content.retainedDuplicate())
+    fun get(key: ByteArray) = lock.readLock().withLock {
+        map[CacheKey(key)]?.run {
+            CacheEntry(metadata, content.retainedDuplicate())
+        }
     }
 
     fun put(
@@ -109,18 +126,18 @@ class InMemoryCache(
         value: CacheEntry,
     ) {
         val cacheKey = CacheKey(key)
-        val oldSize = map.put(cacheKey, value)?.let { old ->
-            val result = old.content.readableBytes()
-            old.content.release()
-            result
-        } ?: 0
-        val delta = value.content.readableBytes() - oldSize
-        var newSize = size.updateAndGet { currentSize: Long ->
-            currentSize + delta
-        }
-        removalQueue.put(RemovalQueueElement(cacheKey, value, Instant.now().plus(maxAge)))
-        while (newSize > maxSize) {
-            newSize = removeEldest()
+        lock.writeLock().withLock {
+            val oldSize = map.put(cacheKey, value)?.let { old ->
+                val result = old.content.readableBytes()
+                old.content.release()
+                result
+            } ?: 0
+            val delta = value.content.readableBytes() - oldSize
+            mapSize += delta
+            removalQueue.offer(RemovalQueueElement(cacheKey, value, Instant.now().plus(maxAge)))
+            while (mapSize > maxSize) {
+                removeEldest()
+            }
         }
     }
 }
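The InMemoryCache rewrite above swaps the lock-free ConcurrentHashMap/AtomicLong/PriorityBlockingQueue combination for a plain HashMap and PriorityQueue guarded by a single ReentrantReadWriteLock, with the eviction thread parked on a condition variable instead of polling a blocking queue. The following is a minimal, self-contained Kotlin sketch of that pattern only; the ExpiringStore name and the String payload are illustrative stand-ins, not RBCS types.

import java.time.Duration
import java.time.Instant
import java.util.PriorityQueue
import java.util.concurrent.TimeUnit
import java.util.concurrent.locks.ReentrantReadWriteLock
import kotlin.concurrent.withLock

// Sketch of the HashMap + PriorityQueue + ReentrantReadWriteLock eviction pattern.
class ExpiringStore(private val maxAge: Duration) {
    private class QueueElement(val key: String, val expiry: Instant) : Comparable<QueueElement> {
        override fun compareTo(other: QueueElement) = expiry.compareTo(other.expiry)
    }

    private val map = HashMap<String, String>()
    private val removalQueue = PriorityQueue<QueueElement>()
    private val lock = ReentrantReadWriteLock()
    private val cond = lock.writeLock().newCondition()

    @Volatile
    private var running = true

    // The eviction thread owns the write lock, but cond.await() releases it while
    // sleeping, so readers and writers are not starved.
    private val gcThread = Thread.ofVirtual().name("expiring-store-gc").start {
        lock.writeLock().withLock {
            while (running) {
                val el = removalQueue.poll()
                if (el == null) {
                    // Nothing queued: sleep until a put() signals or a second elapses.
                    cond.await(1000, TimeUnit.MILLISECONDS)
                    continue
                }
                val now = Instant.now()
                if (now > el.expiry) {
                    map.remove(el.key)
                } else {
                    // Not expired yet: push it back and sleep at most until its expiry.
                    removalQueue.offer(el)
                    val interval = minOf(Duration.between(now, el.expiry), Duration.ofSeconds(1))
                    cond.await(interval.toMillis(), TimeUnit.MILLISECONDS)
                }
            }
            map.clear()
        }
    }

    fun get(key: String): String? = lock.readLock().withLock { map[key] }

    fun put(key: String, value: String) {
        lock.writeLock().withLock {
            map[key] = value
            removalQueue.offer(QueueElement(key, Instant.now().plus(maxAge)))
            cond.signal() // wake the eviction thread so it can reschedule its wait
        }
    }

    fun close() {
        running = false
        lock.writeLock().withLock { cond.signal() }
        gcThread.join()
    }
}

fun main() {
    val store = ExpiringStore(Duration.ofSeconds(1))
    store.put("answer", "42")
    println(store.get("answer")) // 42
    Thread.sleep(1500)
    println(store.get("answer")) // normally null by now: the entry has expired
    store.close()
}

Compared to the previous blocking-queue approach, all mutation now happens under one lock, which is what lets the cache replace the atomic size counter with a plain mapSize field.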
@@ -4,7 +4,6 @@ import io.netty.channel.ChannelFactory
 import io.netty.channel.EventLoopGroup
 import io.netty.channel.socket.DatagramChannel
 import io.netty.channel.socket.SocketChannel
-import io.netty.util.concurrent.Future
 import net.woggioni.rbcs.api.CacheHandlerFactory
 import net.woggioni.rbcs.api.Configuration
 import net.woggioni.rbcs.common.RBCS
@@ -16,7 +15,6 @@ data class InMemoryCacheConfiguration(
     val digestAlgorithm : String?,
     val compressionEnabled: Boolean,
     val compressionLevel: Int,
-    val chunkSize : Int
 ) : Configuration.Cache {
     override fun materialize() = object : CacheHandlerFactory {
         private val cache = InMemoryCache(maxAge, maxSize)
@@ -24,6 +22,7 @@ data class InMemoryCacheConfiguration(
         override fun asyncClose() = cache.asyncClose()
 
         override fun newHandler(
+            cfg : Configuration,
             eventLoop: EventLoopGroup,
             socketChannelFactory: ChannelFactory<SocketChannel>,
             datagramChannelFactory: ChannelFactory<DatagramChannel>
@@ -2,15 +2,9 @@ package net.woggioni.rbcs.server.cache
 
 import io.netty.buffer.ByteBuf
 import io.netty.channel.ChannelHandlerContext
-import io.netty.channel.SimpleChannelInboundHandler
+import net.woggioni.rbcs.api.CacheHandler
 import net.woggioni.rbcs.api.message.CacheMessage
-import net.woggioni.rbcs.api.message.CacheMessage.CacheContent
-import net.woggioni.rbcs.api.message.CacheMessage.CacheGetRequest
-import net.woggioni.rbcs.api.message.CacheMessage.CachePutRequest
-import net.woggioni.rbcs.api.message.CacheMessage.CachePutResponse
-import net.woggioni.rbcs.api.message.CacheMessage.CacheValueFoundResponse
-import net.woggioni.rbcs.api.message.CacheMessage.CacheValueNotFoundResponse
-import net.woggioni.rbcs.api.message.CacheMessage.LastCacheContent
+import net.woggioni.rbcs.api.message.CacheMessage.*
 import net.woggioni.rbcs.common.ByteBufOutputStream
 import net.woggioni.rbcs.common.RBCS.processCacheKey
 import java.util.zip.Deflater
@@ -22,9 +16,17 @@ class InMemoryCacheHandler(
     private val digestAlgorithm: String?,
     private val compressionEnabled: Boolean,
     private val compressionLevel: Int
-) : SimpleChannelInboundHandler<CacheMessage>() {
+) : CacheHandler() {
 
-    private interface InProgressPutRequest : AutoCloseable {
+    private interface InProgressRequest : AutoCloseable {
+    }
+
+    private class InProgressGetRequest(val request : CacheGetRequest) : InProgressRequest {
+        override fun close() {
+        }
+    }
+
+    private interface InProgressPutRequest : InProgressRequest {
         val request: CachePutRequest
         val buf: ByteBuf
 
@@ -33,18 +35,14 @@ class InMemoryCacheHandler(
 
     private inner class InProgressPlainPutRequest(ctx: ChannelHandlerContext, override val request: CachePutRequest) :
         InProgressPutRequest {
-        override val buf = ctx.alloc().compositeBuffer()
+        override val buf = ctx.alloc().compositeHeapBuffer()
 
-        private val stream = ByteBufOutputStream(buf).let {
-            if (compressionEnabled) {
-                DeflaterOutputStream(it, Deflater(compressionLevel))
-            } else {
-                it
-            }
-        }
 
         override fun append(buf: ByteBuf) {
-            this.buf.addComponent(true, buf.retain())
+            if(buf.isDirect) {
+                this.buf.writeBytes(buf)
+            } else {
+                this.buf.addComponent(true, buf.retain())
+            }
         }
 
         override fun close() {
@@ -72,7 +70,7 @@ class InMemoryCacheHandler(
         }
     }
 
-    private var inProgressPutRequest: InProgressPutRequest? = null
+    private var inProgressRequest: InProgressRequest? = null
 
     override fun channelRead0(ctx: ChannelHandlerContext, msg: CacheMessage) {
         when (msg) {
@@ -85,24 +83,11 @@ class InMemoryCacheHandler(
     }
 
     private fun handleGetRequest(ctx: ChannelHandlerContext, msg: CacheGetRequest) {
-        cache.get(processCacheKey(msg.key, digestAlgorithm))?.let { value ->
-            ctx.writeAndFlush(CacheValueFoundResponse(msg.key, value.metadata))
-            if (compressionEnabled) {
-                val buf = ctx.alloc().heapBuffer()
-                InflaterOutputStream(ByteBufOutputStream(buf)).use {
-                    value.content.readBytes(it, value.content.readableBytes())
-                    value.content.release()
-                    buf.retain()
-                }
-                ctx.writeAndFlush(LastCacheContent(buf))
-            } else {
-                ctx.writeAndFlush(LastCacheContent(value.content))
-            }
-        } ?: ctx.writeAndFlush(CacheValueNotFoundResponse())
+        inProgressRequest = InProgressGetRequest(msg)
     }
 
     private fun handlePutRequest(ctx: ChannelHandlerContext, msg: CachePutRequest) {
-        inProgressPutRequest = if(compressionEnabled) {
+        inProgressRequest = if(compressionEnabled) {
             InProgressCompressedPutRequest(ctx, msg)
         } else {
             InProgressPlainPutRequest(ctx, msg)
@@ -110,27 +95,46 @@ class InMemoryCacheHandler(
     }
 
     private fun handleCacheContent(ctx: ChannelHandlerContext, msg: CacheContent) {
-        inProgressPutRequest?.append(msg.content())
+        val req = inProgressRequest
+        if(req is InProgressPutRequest) {
+            req.append(msg.content())
+        }
     }
 
     private fun handleLastCacheContent(ctx: ChannelHandlerContext, msg: LastCacheContent) {
         handleCacheContent(ctx, msg)
-        inProgressPutRequest?.let { inProgressRequest ->
-            inProgressPutRequest = null
-            val buf = inProgressRequest.buf
-            buf.retain()
-            inProgressRequest.close()
-            val cacheKey = processCacheKey(inProgressRequest.request.key, digestAlgorithm)
-            cache.put(cacheKey, CacheEntry(inProgressRequest.request.metadata, buf))
-            ctx.writeAndFlush(CachePutResponse(inProgressRequest.request.key))
+        when(val req = inProgressRequest) {
+            is InProgressGetRequest -> {
+                cache.get(processCacheKey(req.request.key, digestAlgorithm))?.let { value ->
+                    sendMessageAndFlush(ctx, CacheValueFoundResponse(req.request.key, value.metadata))
+                    if (compressionEnabled) {
+                        val buf = ctx.alloc().heapBuffer()
+                        InflaterOutputStream(ByteBufOutputStream(buf)).use {
+                            value.content.readBytes(it, value.content.readableBytes())
+                            value.content.release()
+                            buf.retain()
+                        }
+                        sendMessage(ctx, LastCacheContent(buf))
+                    } else {
+                        sendMessage(ctx, LastCacheContent(value.content))
+                    }
+                } ?: sendMessage(ctx, CacheValueNotFoundResponse())
+            }
+            is InProgressPutRequest -> {
+                this.inProgressRequest = null
+                val buf = req.buf
+                buf.retain()
+                req.close()
+                val cacheKey = processCacheKey(req.request.key, digestAlgorithm)
+                cache.put(cacheKey, CacheEntry(req.request.metadata, buf))
+                sendMessageAndFlush(ctx, CachePutResponse(req.request.key))
+            }
         }
     }
 
     override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
-        inProgressPutRequest?.let { req ->
-            req.buf.release()
-            inProgressPutRequest = null
-        }
+        inProgressRequest?.close()
+        inProgressRequest = null
         super.exceptionCaught(ctx, cause)
     }
 }
@@ -31,16 +31,12 @@ class InMemoryCacheProvider : CacheProvider<InMemoryCacheConfiguration> {
             ?.let(String::toInt)
             ?: Deflater.DEFAULT_COMPRESSION
         val digestAlgorithm = el.renderAttribute("digest")
-        val chunkSize = el.renderAttribute("chunk-size")
-            ?.let(Integer::decode)
-            ?: 0x10000
         return InMemoryCacheConfiguration(
             maxAge,
             maxSize,
             digestAlgorithm,
             enableCompression,
             compressionLevel,
-            chunkSize
         )
     }
 
@@ -60,7 +56,6 @@ class InMemoryCacheProvider : CacheProvider<InMemoryCacheConfiguration> {
             }?.let {
                 attr("compression-level", it.toString())
             }
-            attr("chunk-size", chunkSize.toString())
         }
         result
     }
@@ -27,10 +27,11 @@ object Parser {
         val root = document.documentElement
         val anonymousUser = User("", null, emptySet(), null)
         var connection: Configuration.Connection = Configuration.Connection(
+            Duration.of(30, ChronoUnit.SECONDS),
             Duration.of(60, ChronoUnit.SECONDS),
-            Duration.of(30, ChronoUnit.SECONDS),
-            Duration.of(30, ChronoUnit.SECONDS),
-            67108864
+            Duration.of(60, ChronoUnit.SECONDS),
+            0x4000000,
+            0x10000
         )
         var eventExecutor: Configuration.EventExecutor = Configuration.EventExecutor(true)
         var cache: Cache? = null
@@ -119,11 +120,14 @@ object Parser {
                         ?.let(Duration::parse) ?: Duration.of(60, ChronoUnit.SECONDS)
                     val maxRequestSize = child.renderAttribute("max-request-size")
                         ?.let(Integer::decode) ?: 0x4000000
+                    val chunkSize = child.renderAttribute("chunk-size")
+                        ?.let(Integer::decode) ?: 0x10000
                     connection = Configuration.Connection(
                         idleTimeout,
                         readIdleTimeout,
                         writeIdleTimeout,
-                        maxRequestSize
+                        maxRequestSize,
+                        chunkSize
                     )
                 }
 
@@ -40,6 +40,7 @@ object Serializer {
                     attr("read-idle-timeout", connection.readIdleTimeout.toString())
                     attr("write-idle-timeout", connection.writeIdleTimeout.toString())
                     attr("max-request-size", connection.maxRequestSize.toString())
+                    attr("chunk-size", connection.chunkSize.toString())
                 }
             }
             node("event-executor") {
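With chunk-size now living on the connection element, Parser reads it with the same Integer.decode idiom already used for max-request-size, so both decimal and 0x-prefixed hexadecimal values are accepted and 0x10000 (64 KiB) is the fallback when the attribute is absent. A tiny hedged Kotlin illustration of that idiom follows; the nullable parameter stands in for renderAttribute, which is not reproduced here.

// Integer.decode understands "123" and "0x7B" alike; the elvis operator
// supplies the 64 KiB default when the attribute is missing.
fun parseChunkSize(attributeValue: String?): Int =
    attributeValue?.let(Integer::decode) ?: 0x10000

fun main() {
    println(parseChunkSize(null))     // 65536 (default)
    println(parseChunkSize("123"))    // 123
    println(parseChunkSize("0xa910")) // 43280
}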
@@ -27,6 +27,9 @@ import javax.net.ssl.SSLPeerUnverifiedException
 
 @Sharable
 object ExceptionHandler : ChannelDuplexHandler() {
+
+    val NAME : String = this::class.java.name
 
     private val log = contextLogger()
 
     private val NOT_AUTHORIZED: FullHttpResponse = DefaultFullHttpResponse(
@@ -0,0 +1,13 @@
+package net.woggioni.rbcs.server.handler
+
+import io.netty.channel.ChannelHandlerContext
+import io.netty.channel.SimpleChannelInboundHandler
+import io.netty.handler.codec.http.HttpContent
+
+class BlackHoleRequestHandler : SimpleChannelInboundHandler<HttpContent>() {
+    companion object {
+        val NAME = BlackHoleRequestHandler::class.java.name
+    }
+    override fun channelRead0(ctx: ChannelHandlerContext, msg: HttpContent) {
+    }
+}
@@ -1,28 +0,0 @@
-package net.woggioni.rbcs.server.handler
-
-import io.netty.channel.ChannelHandler.Sharable
-import io.netty.channel.ChannelHandlerContext
-import io.netty.channel.SimpleChannelInboundHandler
-import io.netty.handler.codec.http.HttpContent
-import io.netty.handler.codec.http.LastHttpContent
-import net.woggioni.rbcs.api.message.CacheMessage.CacheContent
-import net.woggioni.rbcs.api.message.CacheMessage.LastCacheContent
-
-@Sharable
-object CacheContentHandler : SimpleChannelInboundHandler<HttpContent>() {
-    val NAME = this::class.java.name
-
-    override fun channelRead0(ctx: ChannelHandlerContext, msg: HttpContent) {
-        when(msg) {
-            is LastHttpContent -> {
-                ctx.fireChannelRead(LastCacheContent(msg.content().retain()))
-                ctx.pipeline().remove(this)
-            }
-            else -> ctx.fireChannelRead(CacheContent(msg.content().retain()))
-        }
-    }
-
-    override fun exceptionCaught(ctx: ChannelHandlerContext?, cause: Throwable?) {
-        super.exceptionCaught(ctx, cause)
-    }
-}
@@ -1,12 +1,14 @@
 package net.woggioni.rbcs.server.handler
 
 import io.netty.channel.ChannelDuplexHandler
+import io.netty.channel.ChannelHandler
 import io.netty.channel.ChannelHandlerContext
 import io.netty.channel.ChannelPromise
 import io.netty.handler.codec.http.DefaultFullHttpResponse
 import io.netty.handler.codec.http.DefaultHttpContent
 import io.netty.handler.codec.http.DefaultHttpResponse
 import io.netty.handler.codec.http.DefaultLastHttpContent
+import io.netty.handler.codec.http.HttpContent
 import io.netty.handler.codec.http.HttpHeaderNames
 import io.netty.handler.codec.http.HttpHeaderValues
 import io.netty.handler.codec.http.HttpHeaders
@@ -15,6 +17,7 @@ import io.netty.handler.codec.http.HttpRequest
 import io.netty.handler.codec.http.HttpResponseStatus
 import io.netty.handler.codec.http.HttpUtil
 import io.netty.handler.codec.http.HttpVersion
+import io.netty.handler.codec.http.LastHttpContent
 import net.woggioni.rbcs.api.CacheValueMetadata
 import net.woggioni.rbcs.api.message.CacheMessage
 import net.woggioni.rbcs.api.message.CacheMessage.CacheContent
@@ -27,19 +30,29 @@ import net.woggioni.rbcs.api.message.CacheMessage.LastCacheContent
 import net.woggioni.rbcs.common.createLogger
 import net.woggioni.rbcs.common.debug
 import net.woggioni.rbcs.common.warn
+import net.woggioni.rbcs.server.exception.ExceptionHandler
 import java.nio.file.Path
-import java.util.Locale
 
-class ServerHandler(private val serverPrefix: Path) :
+class ServerHandler(private val serverPrefix: Path, private val cacheHandlerSupplier : () -> ChannelHandler) :
     ChannelDuplexHandler() {
 
     companion object {
         private val log = createLogger<ServerHandler>()
-        val NAME = this::class.java.name
+        val NAME = ServerHandler::class.java.name
     }
 
     private var httpVersion = HttpVersion.HTTP_1_1
     private var keepAlive = true
+    private var pipelinedRequests = 0
+
+    private fun newRequest() {
+        pipelinedRequests += 1
+    }
+
+    private fun requestCompleted(ctx : ChannelHandlerContext) {
+        pipelinedRequests -= 1
+        if(pipelinedRequests == 0) ctx.read()
+    }
 
     private fun resetRequestMetadata() {
         httpVersion = HttpVersion.HTTP_1_1
@@ -59,14 +72,38 @@ class ServerHandler(private val serverPrefix: Path) :
         }
     }
 
+    private var cacheRequestInProgress : Boolean = false
+
+    override fun handlerAdded(ctx: ChannelHandlerContext) {
+        ctx.read()
+    }
+
     override fun channelRead(ctx: ChannelHandlerContext, msg: Any) {
         when (msg) {
             is HttpRequest -> handleRequest(ctx, msg)
+            is HttpContent -> {
+                if(cacheRequestInProgress) {
+                    if(msg is LastHttpContent) {
+                        super.channelRead(ctx, LastCacheContent(msg.content().retain()))
+                        cacheRequestInProgress = false
+                    } else {
+                        super.channelRead(ctx, CacheContent(msg.content().retain()))
+                    }
+                    msg.release()
+                } else {
+                    super.channelRead(ctx, msg)
+                }
+            }
             else -> super.channelRead(ctx, msg)
         }
     }
 
+    override fun channelReadComplete(ctx: ChannelHandlerContext) {
+        super.channelReadComplete(ctx)
+        if(cacheRequestInProgress) {
+            ctx.read()
+        }
+    }
+
     override fun write(ctx: ChannelHandlerContext, msg: Any, promise: ChannelPromise?) {
         if (msg is CacheMessage) {
@@ -84,14 +121,18 @@ class ServerHandler(private val serverPrefix: Path) :
                     val buf = ctx.alloc().buffer(keyBytes.size).apply {
                         writeBytes(keyBytes)
                     }
-                    ctx.writeAndFlush(DefaultLastHttpContent(buf))
+                    ctx.writeAndFlush(DefaultLastHttpContent(buf)).also {
+                        requestCompleted(ctx)
+                    }
                 }
 
                 is CacheValueNotFoundResponse -> {
                     val response = DefaultFullHttpResponse(httpVersion, HttpResponseStatus.NOT_FOUND)
                     response.headers()[HttpHeaderNames.CONTENT_LENGTH] = 0
                     setKeepAliveHeader(response.headers())
-                    ctx.writeAndFlush(response)
+                    ctx.writeAndFlush(response).also {
+                        requestCompleted(ctx)
+                    }
                 }
 
                 is CacheValueFoundResponse -> {
@@ -108,7 +149,9 @@ class ServerHandler(private val serverPrefix: Path) :
                 }
 
                 is LastCacheContent -> {
-                    ctx.writeAndFlush(DefaultLastHttpContent(msg.content()))
+                    ctx.writeAndFlush(DefaultLastHttpContent(msg.content())).also {
+                        requestCompleted(ctx)
+                    }
                 }
 
                 is CacheContent -> {
@@ -127,6 +170,9 @@ class ServerHandler(private val serverPrefix: Path) :
             } finally {
                 resetRequestMetadata()
             }
+        } else if(msg is LastHttpContent) {
+            ctx.write(msg, promise)
+            requestCompleted(ctx)
         } else super.write(ctx, msg, promise)
     }
 
@@ -137,9 +183,12 @@ class ServerHandler(private val serverPrefix: Path) :
         if (method === HttpMethod.GET) {
             val path = Path.of(msg.uri()).normalize()
             if (path.startsWith(serverPrefix)) {
+                cacheRequestInProgress = true
                 val relativePath = serverPrefix.relativize(path)
-                val key = relativePath.toString()
-                ctx.pipeline().addAfter(NAME, CacheContentHandler.NAME, CacheContentHandler)
+                val key : String = relativePath.toString()
+                newRequest()
+                val cacheHandler = cacheHandlerSupplier()
+                ctx.pipeline().addBefore(ExceptionHandler.NAME, null, cacheHandler)
                 key.let(::CacheGetRequest)
                     .let(ctx::fireChannelRead)
                     ?: ctx.channel().write(CacheValueNotFoundResponse())
@@ -154,12 +203,16 @@ class ServerHandler(private val serverPrefix: Path) :
         } else if (method === HttpMethod.PUT) {
             val path = Path.of(msg.uri()).normalize()
             if (path.startsWith(serverPrefix)) {
+                cacheRequestInProgress = true
                 val relativePath = serverPrefix.relativize(path)
                 val key = relativePath.toString()
                 log.debug(ctx) {
                     "Added value for key '$key' to build cache"
                }
-                ctx.pipeline().addAfter(NAME, CacheContentHandler.NAME, CacheContentHandler)
+                newRequest()
+                val cacheHandler = cacheHandlerSupplier()
+                ctx.pipeline().addBefore(ExceptionHandler.NAME, null, cacheHandler)
 
                 path.fileName?.toString()
                     ?.let {
                         val mimeType = HttpUtil.getMimeType(msg)?.toString()
@@ -176,6 +229,8 @@ class ServerHandler(private val serverPrefix: Path) :
                 ctx.writeAndFlush(response)
             }
         } else if (method == HttpMethod.TRACE) {
+            newRequest()
+            ctx.pipeline().addBefore(ExceptionHandler.NAME, null, TraceHandler)
             super.channelRead(ctx, msg)
         } else {
             log.warn(ctx) {
@@ -187,42 +242,6 @@ class ServerHandler(private val serverPrefix: Path) :
         }
     }
 
-    data class ContentDisposition(val type: Type?, val fileName: String?) {
-        enum class Type {
-            attachment, `inline`;
-
-            companion object {
-                @JvmStatic
-                fun parse(maybeString: String?) = maybeString.let { s ->
-                    try {
-                        java.lang.Enum.valueOf(Type::class.java, s)
-                    } catch (ex: IllegalArgumentException) {
-                        null
-                    }
-                }
-            }
-        }
-
-        companion object {
-            @JvmStatic
-            fun parse(contentDisposition: String) : ContentDisposition {
-                val parts = contentDisposition.split(";").dropLastWhile { it.isEmpty() }.toTypedArray()
-                val dispositionType = parts[0].trim { it <= ' ' }.let(Type::parse) // Get the type (e.g., attachment)
-
-                var filename: String? = null
-                for (i in 1..<parts.size) {
-                    val part = parts[i].trim { it <= ' ' }
-                    if (part.lowercase(Locale.getDefault()).startsWith("filename=")) {
-                        filename = part.substring("filename=".length).trim { it <= ' ' }.replace("\"", "")
-                        break
-                    }
-                }
-                return ContentDisposition(dispositionType, filename)
-            }
-        }
-    }
-
     override fun exceptionCaught(ctx: ChannelHandlerContext, cause: Throwable) {
         super.exceptionCaught(ctx, cause)
     }
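The ServerHandler changes above introduce manual read scheduling: handlerAdded() and requestCompleted() call ctx.read() explicitly and a pipelinedRequests counter tracks how many responses are still pending, which suggests channel auto-read is disabled elsewhere in the pipeline. The sketch below is an illustrative, stripped-down Netty handler showing only that counting idea; it is not the project's actual ServerHandler.

import io.netty.channel.ChannelDuplexHandler
import io.netty.channel.ChannelHandlerContext
import io.netty.channel.ChannelPromise
import io.netty.handler.codec.http.HttpRequest
import io.netty.handler.codec.http.LastHttpContent

// Assumes the channel is configured with autoRead = false.
class PipelinedRequestCounter : ChannelDuplexHandler() {
    private var pipelinedRequests = 0
    private var requestBodyInProgress = false

    override fun handlerAdded(ctx: ChannelHandlerContext) {
        ctx.read() // ask for the first request explicitly
    }

    override fun channelRead(ctx: ChannelHandlerContext, msg: Any) {
        if (msg is HttpRequest) {
            pipelinedRequests += 1
            requestBodyInProgress = true
        }
        if (msg is LastHttpContent) {
            // also true for FullHttpRequest, whose body arrives in the same message
            requestBodyInProgress = false
        }
        super.channelRead(ctx, msg)
    }

    override fun channelReadComplete(ctx: ChannelHandlerContext) {
        super.channelReadComplete(ctx)
        // Keep reading while a request body is still streaming in.
        if (requestBodyInProgress) ctx.read()
    }

    override fun write(ctx: ChannelHandlerContext, msg: Any, promise: ChannelPromise?) {
        super.write(ctx, msg, promise)
        if (msg is LastHttpContent) {
            // A response just finished: only fetch the next request once nothing is pending.
            pipelinedRequests -= 1
            if (pipelinedRequests == 0) ctx.read()
        }
    }
}

The point of the pattern is backpressure: a client that pipelines many requests cannot force the server to buffer unbounded data, because the next read is only issued once the outstanding responses have been written.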
@@ -42,6 +42,7 @@ object TraceHandler : ChannelInboundHandlerAdapter() {
             }
             is LastHttpContent -> {
                 ctx.writeAndFlush(msg)
+                ctx.pipeline().remove(this)
             }
             is HttpContent -> ctx.writeAndFlush(msg)
             else -> super.channelRead(ctx, msg)
@@ -94,6 +94,9 @@ class ThrottlingHandler(private val bucketManager : BucketManager,
                 handleBuckets(buckets, ctx, msg, false)
             }, waitDuration.toMillis(), TimeUnit.MILLISECONDS)
         } else {
+            queuedContent?.let { qc ->
+                qc.forEach { it.release() }
+            }
             this.queuedContent = null
             sendThrottledResponse(ctx, waitDuration)
         }
@@ -115,6 +115,14 @@
                 </xs:documentation>
             </xs:annotation>
         </xs:attribute>
+        <xs:attribute name="chunk-size" type="rbcs:byteSizeType" default="0x10000">
+            <xs:annotation>
+                <xs:documentation>
+                    Maximum byte size of socket write calls
+                    (reduce it to reduce memory consumption, increase it for increased throughput)
+                </xs:documentation>
+            </xs:annotation>
+        </xs:attribute>
     </xs:complexType>
 
     <xs:complexType name="eventExecutorType">
@@ -175,13 +183,6 @@
                 </xs:documentation>
             </xs:annotation>
         </xs:attribute>
-        <xs:attribute name="chunk-size" type="rbcs:byteSizeType" default="0x10000">
-            <xs:annotation>
-                <xs:documentation>
-                    Maximum byte size of socket write calls
-                </xs:documentation>
-            </xs:annotation>
-        </xs:attribute>
         </xs:extension>
     </xs:complexContent>
     </xs:complexType>
@@ -231,14 +232,6 @@
                 </xs:documentation>
             </xs:annotation>
         </xs:attribute>
-        <xs:attribute name="chunk-size" type="rbcs:byteSizeType" default="0x10000">
-            <xs:annotation>
-                <xs:documentation>
-                    Maximum byte size of a cache value that will be stored in memory
-                    (reduce it to reduce memory consumption, increase it for increased throughput)
-                </xs:documentation>
-            </xs:annotation>
-        </xs:attribute>
         </xs:extension>
     </xs:complexContent>
     </xs:complexType>
@@ -1,5 +1,14 @@
 package net.woggioni.rbcs.server.test.utils;
 
+import java.math.BigInteger;
+import java.security.KeyPair;
+import java.security.KeyPairGenerator;
+import java.security.PrivateKey;
+import java.security.SecureRandom;
+import java.security.cert.X509Certificate;
+import java.time.Instant;
+import java.time.temporal.ChronoUnit;
+import java.util.Date;
 import org.bouncycastle.asn1.DERSequence;
 import org.bouncycastle.asn1.x500.X500Name;
 import org.bouncycastle.asn1.x509.BasicConstraints;
@@ -15,16 +24,6 @@ import org.bouncycastle.cert.jcajce.JcaX509v3CertificateBuilder;
 import org.bouncycastle.operator.ContentSigner;
 import org.bouncycastle.operator.jcajce.JcaContentSignerBuilder;
 
-import java.math.BigInteger;
-import java.security.KeyPair;
-import java.security.KeyPairGenerator;
-import java.security.PrivateKey;
-import java.security.SecureRandom;
-import java.security.cert.X509Certificate;
-import java.time.Instant;
-import java.time.temporal.ChronoUnit;
-import java.util.Date;
-
 public class CertificateUtils {
 
     public record X509Credentials(
@@ -41,7 +41,8 @@ abstract class AbstractBasicAuthServerTest : AbstractServerTest() {
                 Duration.of(60, ChronoUnit.SECONDS),
                 Duration.of(30, ChronoUnit.SECONDS),
                 Duration.of(30, ChronoUnit.SECONDS),
-                0x1000
+                0x1000,
+                0x10000
             ),
             users.asSequence().map { it.name to it}.toMap(),
             sequenceOf(writersGroup, readersGroup).map { it.name to it}.toMap(),
@@ -50,8 +51,7 @@ abstract class AbstractBasicAuthServerTest : AbstractServerTest() {
                 maxAge = Duration.ofSeconds(3600 * 24),
                 digestAlgorithm = "MD5",
                 compressionLevel = Deflater.DEFAULT_COMPRESSION,
-                compressionEnabled = false,
-                chunkSize = 0x1000
+                compressionEnabled = false
             ),
             Configuration.BasicAuthentication(),
             null,
@@ -147,7 +147,8 @@ abstract class AbstractTlsServerTest : AbstractServerTest() {
             Duration.of(60, ChronoUnit.SECONDS),
             Duration.of(30, ChronoUnit.SECONDS),
             Duration.of(30, ChronoUnit.SECONDS),
-            0x1000
+            0x1000,
+            0x10000
         ),
         users.asSequence().map { it.name to it }.toMap(),
         sequenceOf(writersGroup, readersGroup).map { it.name to it }.toMap(),
@@ -156,7 +157,6 @@ abstract class AbstractTlsServerTest : AbstractServerTest() {
             compressionEnabled = false,
             compressionLevel = Deflater.DEFAULT_COMPRESSION,
             digestAlgorithm = "MD5",
-            chunkSize = 0x1000
         ),
 //        InMemoryCacheConfiguration(
 //            maxAge = Duration.ofSeconds(3600 * 24),
@@ -154,7 +154,7 @@ class BasicAuthServerTest : AbstractBasicAuthServerTest() {
     }
 
     @Test
-    @Order(6)
+    @Order(8)
     fun getAsAThrottledUser() {
         val client: HttpClient = HttpClient.newHttpClient()
 
@@ -172,7 +172,7 @@ class BasicAuthServerTest : AbstractBasicAuthServerTest() {
     }
 
     @Test
-    @Order(7)
+    @Order(9)
     fun getAsAThrottledUser2() {
         val client: HttpClient = HttpClient.newHttpClient()
 
@@ -41,7 +41,8 @@ class NoAuthServerTest : AbstractServerTest() {
             Duration.of(60, ChronoUnit.SECONDS),
             Duration.of(30, ChronoUnit.SECONDS),
             Duration.of(30, ChronoUnit.SECONDS),
-            0x1000
+            0x1000,
+            0x10000
         ),
         emptyMap(),
         emptyMap(),
@@ -51,7 +52,6 @@ class NoAuthServerTest : AbstractServerTest() {
             digestAlgorithm = "MD5",
             compressionLevel = Deflater.DEFAULT_COMPRESSION,
             maxSize = 0x1000000,
-            chunkSize = 0x1000
         ),
         null,
         null,
@@ -166,4 +166,17 @@ class TlsServerTest : AbstractTlsServerTest() {
         Assertions.assertEquals(HttpResponseStatus.OK.code(), response.statusCode())
         println(String(response.body()))
     }
+
+    @Test
+    @Order(10)
+    fun putAsUnknownUserUser() {
+        val (key, value) = keyValuePair
+        val client: HttpClient = getHttpClient(getClientKeyStore(ca, X500Name("CN=Unknown user")))
+        val requestBuilder = newRequestBuilder(key)
+            .header("Content-Type", "application/octet-stream")
+            .PUT(HttpRequest.BodyPublishers.ofByteArray(value))
+
+        val response: HttpResponse<String> = client.send(requestBuilder.build(), HttpResponse.BodyHandlers.ofString())
+        Assertions.assertEquals(HttpResponseStatus.INTERNAL_SERVER_ERROR.code(), response.statusCode())
+    }
 }
@@ -7,9 +7,10 @@
             read-idle-timeout="PT10M"
             write-idle-timeout="PT11M"
             idle-timeout="PT30M"
-            max-request-size="101325"/>
+            max-request-size="101325"
+            chunk-size="0xa910"/>
     <event-executor use-virtual-threads="false"/>
-    <cache xs:type="rbcs:fileSystemCacheType" path="/tmp/rbcs" max-age="P7D" chunk-size="0xa910"/>
+    <cache xs:type="rbcs:fileSystemCacheType" path="/tmp/rbcs" max-age="P7D"/>
     <authentication>
         <none/>
     </authentication>
@@ -9,9 +9,10 @@
             max-request-size="67108864"
             idle-timeout="PT30S"
             read-idle-timeout="PT60S"
-            write-idle-timeout="PT60S"/>
+            write-idle-timeout="PT60S"
+            chunk-size="123"/>
     <event-executor use-virtual-threads="true"/>
-    <cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" chunk-size="123">
+    <cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D">
         <server host="memcached" port="11211"/>
     </cache>
     <authorization>
@@ -8,9 +8,10 @@
             read-idle-timeout="PT10M"
             write-idle-timeout="PT11M"
             idle-timeout="PT30M"
-            max-request-size="101325"/>
+            max-request-size="101325"
+            chunk-size="456"/>
     <event-executor use-virtual-threads="false"/>
-    <cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" digest="SHA-256" chunk-size="456" compression-mode="deflate" compression-level="7">
+    <cache xs:type="rbcs-memcache:memcacheCacheType" max-age="P7D" digest="SHA-256" compression-mode="deflate" compression-level="7">
         <server host="127.0.0.1" port="11211" max-connections="10" connection-timeout="PT20S"/>
     </cache>
     <authentication>
@@ -7,9 +7,10 @@
             read-idle-timeout="PT10M"
             write-idle-timeout="PT11M"
             idle-timeout="PT30M"
-            max-request-size="4096"/>
+            max-request-size="4096"
+            chunk-size="0xa91f"/>
     <event-executor use-virtual-threads="false"/>
-    <cache xs:type="rbcs:inMemoryCacheType" max-age="P7D" chunk-size="0xa91f"/>
+    <cache xs:type="rbcs:inMemoryCacheType" max-age="P7D"/>
     <authorization>
         <users>
             <user name="user1" password="password1">