webdocs/_index.md
+2 -2 (2 additions & 2 deletions)
@@ -3,9 +3,9 @@ title: Chall-Manager
description: Chall-Manager solves the Challenge Instances on Demand problem through a future-proof generalization. It can deploy anything, anywhere, at any time !
---
-{{% alert title="Warning" color="warning" %}}
+{{< alert title="Warning" color="warning" >}}
Currently entering public beta phase, for any issue: ctfer-io@protonmail.com
-{{% /alert %}}
+{{< /alert >}}
Chall-Manager is a MicroService whose goal is to manage challenges and their instances.
Each of those instances is self-operated by its source (either a team or a user), which rebalances the concerns and empowers them.
webdocs/challmaker-guides/create-scenario/index.md
+9 -5 (9 additions & 5 deletions)
@@ -127,7 +127,10 @@ pulumi up # preview and deploy
## Pack it up !
Now that your scenario is designed and coded according to your artistic direction, you have to prepare it for an OCI registry such that Chall-Manager can process it.
-Make sure to remove all unnecessary files, and pack the directory it is contained within.
+Make sure to remove all unnecessary files, such that the scenario is minimal.
+
+A scenario is an OCI blob that is delivered through an OCI registry (e.g. [Docker Registry](https://hub.docker.com/_/registry), [Zot](https://github.com/project-zot/zot), [Artifactory](https://jfrog.com/artifactory/)). To ease the creation and distribution of a scenario we will use `chall-manager-cli`.
+It will pack the files of a `directory` inside an OCI blob of data with the annotation `application/vnd.ctfer-io.file`, and push it to the registry.
```bash
cd ..
@@ -138,16 +141,17 @@ oras push \
$(find scenario -type f)
```
+Authentication is optional yet recommended. The same holds for certificate validation, which can be turned off with `--insecure`.
+
And you're done. Yes, it was that easy :)
But it could be even easier [using the SDK](/docs/chall-manager/challmaker-guides/software-development-kit) !
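As a rough sketch of the authentication note above — the registry host, repository, tag and credentials are placeholders to adapt to your own setup:

```bash
# Log in once against the registry (host and credentials are placeholders).
oras login registry.my-ctf.lan -u challmaker -p 'changeme'

# Push every file of the scenario directory as an OCI artifact.
# --insecure skips certificate validation, as mentioned above.
oras push \
    --insecure \
    registry.my-ctf.lan/challenges/my-challenge:latest \
    $(find scenario -type f)
```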
webdocs/challmaker-guides/flag-variation-engine.md
+14 -15 (14 additions & 15 deletions)
@@ -5,32 +5,33 @@ categories: [How-to Guides]
tags: [Anticheat]
---
-Shareflag is considered by some as the worst part of competitions leading to unfair events, while some others consider this a strategy.
-We consider this a problem we could solve.
+Shareflag is widely frowned upon by participants, as it undermines the spirit of fair play and learning.
+It not only skews the scoreboard but also devalues the hard work of those who solve challenges independently. This behavior creates frustration among honest teams and can diminish the overall experience, turning the event into a disheartening one.
+
+Nevertheless, Chall-Manager enables you to **solve shareflag**.
## Context
-In "standard" CTFs as we could most see them, it is impossible to solve this problem: if everyone has the same binary to reverse-engineer, how can you differentiate the flag per each team thus avoid shareflag ?
+In common CTFs, a challenge has a flag that must be found by players in order to claim the points. But this implies that everyone has been provided the same content, e.g. the same binary to reverse-engineer.
+With such an approach, how can you differentiate the flag for each team, and thus detect -if not avoid- shareflag ?
-For this, you have to variate the flag for each source. One simple solution is to [use the SDK](#use-the-sdk).
+A perfect solution would be to have a distinct flag for each source. One simple way to achieve this is to [use the SDK](#use-the-sdk).
## Use the SDK
-The SDK can variate a given input with human-readable equivalent characters in the ASCII-extended charset, making it handleable for CTF platforms (at least we expect it). If one character is out of those ASCII-character, it will be untouched.
-
-To import this part of the SDK, execute the following.
+The SDK can **variate** a given input with human-readable equivalent characters in the ASCII-extended charset, making it handleable by CTF platforms (or at least we expect so). If a character is outside this ASCII charset, it will remain unchanged.
-```bash
-go get github.com/ctfer-io/chall-manager/sdk
-```
-
-Then, in your scenario, you can create a constant that contains the "base flag" (i.e. the unvariated flag).
+In your scenario, you can create a constant that contains the original flag.
```go
const flag = "my-supper-flag"
```
-Finally, you can export the variated flag.
+Then, you can simply pass it to the SDK to variate it. The SDK uses the identity of the instance as the PRNG seed, so you should avoid exposing this identity to the players.
+
+If you want to use a decorator around the flag (e.g. `BREFCTF{}`), don't put it in the `flag` constant, else it will be variated too.
+
+A complete example follows.
{{< tabpane code=true >}}
{{< tab header="SDK" lang="go" >}}
@@ -85,5 +86,3 @@ func main() {
}
{{< /tab >}}
{{< /tabpane >}}
-
-If you want to use decorator around the flag (e.g. `BREFCTF{}`), don't put it in the `flag` constant else it will be variated.
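To picture what variation means, here is a purely illustrative Go sketch — not the SDK's actual implementation or charset — of a deterministic, identity-seeded transformation: characters that have a look-alike equivalent are swapped based on a PRNG seeded with the identity, and any other character is left untouched.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"math/rand"
)

// lookalikes maps characters to visually-equivalent alternatives.
// The real SDK charset is larger; this table only illustrates the idea.
var lookalikes = map[rune][]rune{
	'a': {'a', 'A', '4', '@'},
	'e': {'e', 'E', '3'},
	'i': {'i', 'I', '1', '!'},
	'o': {'o', 'O', '0'},
	's': {'s', 'S', '5', '$'},
}

// variate deterministically transforms the flag using the identity as seed,
// so the same (identity, flag) pair always yields the same variated flag.
func variate(identity, flag string) string {
	h := fnv.New64a()
	h.Write([]byte(identity))
	prng := rand.New(rand.NewSource(int64(h.Sum64())))

	out := []rune(flag)
	for i, r := range out {
		if alts, ok := lookalikes[r]; ok {
			out[i] = alts[prng.Intn(len(alts))]
		}
		// Characters without an equivalent remain unchanged.
	}
	return string(out)
}

func main() {
	fmt.Println(variate("team-42-instance-1", "my-supper-flag"))
}
```

Because the transformation is deterministic per identity, the CTF platform can recompute the expected flag for each source and validate it only for that source.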
webdocs/challmaker-guides/software-development-kit/index.md
+23 -31 (23 additions & 31 deletions)
@@ -6,7 +6,7 @@ tags: [SDK, Kubernetes]
---
When you (a ChallMaker) want to deploy a single container specific to each [source](/docs/chall-manager/glossary#source), you don't want to understand how to deploy it to a specific provider. In fact, your technical expertise does not imply you are a Cloud expert... and nobody expects you to be !
-Writing a 500-lines long [scenario](/docs/chall-manager/glossary#scenario) fitting the API only to deploy a container is a tedious job you don't want to do more than once: create a deployment, the service, possibly the ingress, have a configuration and secrets to handle...
+Writing a 500-line-long [scenario](/docs/chall-manager/glossary#scenario) fitting the API only to deploy a container in a hardened environment is a tedious job you don't have time for.
For this reason, we built a Software Development Kit to ease your use of chall-manager.
It contains all the features of the chall-manager without burdening you with API compliance.
@@ -15,39 +15,38 @@ Additionnaly, we prepared some common use-cases factory to help you _focus on yo
-The community is free to create new pre-made recipes, and we welcome contributions to add new official ones. Please open an issue as a Request For Comments, and a Pull Request if possible to propose an implementation.
+The community is **free to create and distribute new (or alternative) pre-made recipes**, and we welcome contributions to add new official ones. Please open an issue as a Request For Comments, and a Pull Request if possible to propose an implementation.
## Build scenarios
-Fitting the chall-manager scenario API imply fitting inputs and outputs models.
-
-Even if easy, it still requires work, and functionalities or evolutions does not guarantee you easy maintenance: offline compatibility with OCI registry, pre-configured providers, etc.
-
-Indeed, if you are dealing with a chall-manager deployed in a Kubernetes cluster, the `...pulumi.ResourceOption` contains a pre-configured provider such that every Kubernetes resources the scenario will create, they will be deployed in the proper namespace.
+The common API for chall-manager scenarios is very simple, defined by inputs and outputs.
+They are respectively fetched from the stack configuration and exported through stack outputs.
### Inputs
-Those are fetchable from the Pulumi configuration.
-
| Name | Required | Description |
|---|:---:|---|
-|`identity`| ✅ |the[identity](/docs/chall-manager/glossary#identity) of the Challenge on Demand request |
+|`identity`| ✅ |The [identity](/docs/chall-manager/glossary#identity) of the Challenge on Demand request.|
### Outputs
-Those should be exported from the Pulumi context.
-
| Name | Required | Description |
|---|:---:|---|
-|`connection_info`| ✅ |the connection information, as a string (e.g. `curl http://a4...d6.my-ctf.lan`) |
-|`flag`| ❌ |the identity-specific flag the CTF platform should only validate for the given [source](/docs/chall-manager/glossary#source)|
+|`connection_info`| ✅ |The connection information, as a string (e.g. `curl http://a4...d6.my-ctf.lan`) |
+|`flag`| ❌ |The identity-specific flag the CTF platform should only validate for the given [source](/docs/chall-manager/glossary#source)|
## Kubernetes ExposedMonopod
-When you want to deploy a challenge composed of a single container, on a Kubernetes cluster, you want it to be fast and easy.
+**Fit:** deploy a single container on a Kubernetes cluster.
-Then, the Kubernetes `ExposedMonopod` fits your needs ! You can easily configure the container you are looking for and deploy it to production in the next seconds.
-The following shows you how easy it is to write a scenario that creates a Deployment with a single replica of a container, exposes a port through a service, then build the ingress specific to the [identity](/docs/chall-manager/glossary#identity) and finally provide the connection information as a `curl` command.
+The `kubernetes.ExposedMonopod` helps you deploy it as a single Pod in a Deployment, expose it with 1 Service per port and, if requested, 1 Ingress per port.
+
+{{< imgproc kubernetes-exposedmonopod Fit "800x800" >}}
+The Kubernetes ExposedMonopod architecture for deployed resources.
+{{< /imgproc >}}
+
+The following example, from the [24h IUT 2023](https://github.com/pandatix/24hiut-2023-cyber) usage of this SDK resource, deploys the Docker image `pandatix/license-lvl1:latest` and exposes port `8080` (implicitly using TCP) through an ingress. The result is used to create the connection information, i.e. a `curl` example command.
+For more info on configuration, please refer to the [code base](https://github.com/ctfer-io/chall-manager/blob/main/sdk/kubernetes/exposed-monopod.go).
To use ingresses, make sure your Kubernetes cluster can deal with them: have an ingress controller (e.g. [Traefik](https://traefik.io/)), and DNS resolution points to the Kubernetes cluster.
{{< /alert >}}
-{{< imgproc kubernetes-exposedmonopod Fit "800x800" >}}
-The Kubernetes ExposedMonopod architecture for deployed resources.
-{{< /imgproc >}}
+## Kubernetes ExposedMultipod
-<!-- TODO provide ExposedMonopod configuration (attributes, required/optional, type, description) -->
+**Fit:** deploy a network of containers on a Kubernetes cluster.
-##Kubernetes ExposedMultipod
+The Kubernetes `ExposedMultipod` helps you deploy many pods with as many deployments, services for each port of each container, and ingresses whenever required. It is a generalization of the [Kubernetes ExposedMonopod](#kubernetes-exposedmonopod).
-When you want to deploy multiple containers together (e.g. a web app with a frontend, a backend, a database and a cache), on a Kubernetes cluster, and want it to be fast and easy.
+{{< imgproc kubernetes-exposedmultipod Fit "800x800" >}}
+The Kubernetes ExposedMultipod architecture for deployed resources.
+{{< /imgproc >}}
-Then, the Kubernetes `ExposedMultipod` fits your needs ! Your can easily configure the containers and the networking rules between them so it deploys to production in the next seconds.
-The following shows you how easy it is to write a scenario that creates multiple deployments, services, ingresses, configmaps, ... and provide the connection information as a `curl` command.
+The following example, from the [NoBrackets 2024](https://github.com/nobrackets-ctf/NoBrackets-2024) usage of this SDK resource, deploys the web _vip-only_ challenge from [Drahoxx](https://x.com/50mgDrahoxx). It is composed of a NodeJS service and a MongoDB. The first is exposed through an ingress, while the other remains internal. A single rule enables traffic from the first to the second on port `27017` (implicitly using TCP).
To use ingresses, make sure your Kubernetes cluster can deal with them: have an ingress controller (e.g. [Traefik](https://traefik.io/)), and DNS resolution points to the Kubernetes cluster.
{{< /alert >}}
-
-{{< imgproc kubernetes-exposedmultipod Fit "800x800" >}}
-The Kubernetes ExposedMultipod architecture for deployed resources.
-{{< /imgproc >}}
-
-The ExposedMultipod is a generalization of the [ExposedMonopod](#kubernetes-exposedmonopod) with \[n\] containers. In fact, the later's implementation passes its container to the first as a network of a single container.
webdocs/challmaker-guides/update-in-production/index.md
+6 -8 (6 additions & 8 deletions)
@@ -19,20 +19,18 @@ We adopted the reflexions of [The Update Framework](https://theupdateframework.i
## What to do
-You will have to update the[scenario](/docs/chall-manager/glossary#scenario), of course.
-Once it is fixed and validated, archive the new version.
+You will have to create a new [scenario](/docs/chall-manager/glossary#scenario), of course.
+Then, you will have to update the challenge configuration to provide this new scenario and an update strategy. If no strategy is specified, it defaults to `update-in-place`.
-Then, you'll have to pick up an Update Strategy.
+Chall-Manager will temporarily block operations on this challenge, and update all existing instances.
+This makes the process predictable and reproducible, so you can test in a pre-production environment before production (and we recommend you do). It also avoids human errors during the fix, and lowers the burden at scale.
| Update in place | ✅ | ✅ | ✅ | ✅ | Efficient in time & cost ; requires high maturity |
| Blue-Green | ❌ | ✅ | ❌ | ✅ | Efficient in time ; costly |
| Recreate | ❌ | ❌ | ✅ | ❌ | Efficient in cost ; time consuming |
-¹ Robustness of both the provider and resources updates. Robustness is the capability of a resource to be finely updated without re-creation.
+¹ Robustness of both the provider and resource updates. Robustness is the capability of a scenario to be finely updated without complete re-creation.
-More information on the selection of those models and how they work internally is available in the [design documentation](/docs/chall-manager/design/hot-update).
-
-You'll only have to update the challenge, specifying the Update Strategy of your choice. Chall-Manager will temporarily block operations on this challenge, and update all existing instances.
-This makes the process predictible and reproductible, thus you can test in a pre-production environment before production. It also avoids human errors during fix, and lower the burden at scale.
+More information on how they work internally is available in the [design documentation](/docs/chall-manager/design/hot-update).