
Commit 27ad5b0

Authored by dd-mergequeue[bot], maxep, mariedm, cdn34dd, and simaoseica-dd

Merge pull request #2282 from DataDog/release/2.27.0

Release `2.27.0`

Co-authored-by: maxep <maxime.epain@datadoghq.com>
Co-authored-by: dd-mergequeue[bot] <121105855+dd-mergequeue[bot]@users.noreply.github.com>
Co-authored-by: mariedm <marie.denis@datadoghq.com>
Co-authored-by: cdn34dd <carlos.nogueira@datadoghq.com>
Co-authored-by: simaoseica-dd <simao.seica@datadoghq.com>
Co-authored-by: xgouchet <xgouchet@users.noreply.github.com>
Co-authored-by: ncreated <maciek.grzybowski@datadoghq.com>

2 parents 868b600 + 1f22940 · commit 27ad5b0

340 files changed: +10984 −6372 lines changed

Some content is hidden: large commits have some content hidden by default, so only a subset of the changed files is rendered below.

BenchmarkTests/BenchmarkTests.xcodeproj/project.pbxproj

Lines changed: 83 additions & 79 deletions
Large diffs are not rendered by default.

BenchmarkTests/Benchmarks/Sources/Benchmarks.swift

Lines changed: 2 additions & 8 deletions
```diff
@@ -83,7 +83,7 @@ public enum Benchmarks {
             )
         )
 
-        let provider = MeterProviderBuilder()
+        return MeterProviderBuilder()
             .with(pushInterval: 10)
             .with(processor: MetricProcessorSdk())
             .with(exporter: metricExporter)
@@ -98,9 +98,6 @@ public enum Benchmarks {
                 "branch": .string(configuration.context.branch),
             ]))
             .build()
-
-        OpenTelemetry.registerMeterProvider(meterProvider: provider)
-        return provider
     }
 
     /// Configure an OpenTelemetry tracer provider.
@@ -121,11 +118,8 @@ public enum Benchmarks {
         let exporter = try! DatadogExporter(config: exporterConfiguration)
         let processor = SimpleSpanProcessor(spanExporter: exporter)
 
-        let provider = TracerProviderBuilder()
+        return TracerProviderBuilder()
             .add(spanProcessor: processor)
             .build()
-
-        OpenTelemetry.registerTracerProvider(tracerProvider: provider)
-        return provider
     }
 }
```
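
With this change, `meterProvider(with:)` and `tracerProvider(with:)` no longer register the providers globally via `OpenTelemetry.registerMeterProvider` / `registerTracerProvider`; they simply return the built provider, and the caller wires it into its own instruments, as the runner's `AppDelegate` does further down. A minimal caller-side sketch, assuming a `Benchmarks.Configuration` value and the runner's `Vitals` and `Profiler` types (the helper name `makeInstruments` is hypothetical):

```swift
import DatadogBenchmarks

// Sketch only: the provider factories now return their providers instead of
// registering them globally, so the caller keeps the references and injects
// them into its own instruments. `makeInstruments` is a hypothetical helper;
// `Vitals` and `Profiler` are the runner types shown later in this commit.
func makeInstruments(with configuration: Benchmarks.Configuration) -> (Vitals, Profiler) {
    let vitals = Vitals(provider: Benchmarks.meterProvider(with: configuration))
    let profiler = Profiler(provider: Benchmarks.tracerProvider(with: configuration))
    return (vitals, profiler)
}
```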

BenchmarkTests/README.md

Lines changed: 60 additions & 47 deletions
````diff
@@ -10,51 +10,6 @@ To open the project with the correct environment, make sure Xcode is closed and
 make benchmark-tests-open
 ```
 
-> [!TIP]
-> If Xcode is failing to install packages with a message such as **failed to clone**, **requires authentication** or **dependency failed**, it might be due to a Git configuration that forces HTTPS URLs to be rewritten as SSH. In that case, try commenting the following lines in your global Git config (~/.gitconfig).
-
-```bash
-[url "ssh://git@github.com/"]
-    insteadOf = https://github.com/
-```
-
-> Or remove it with the following command:
-
-```bash
-git config --global --unset url."ssh://git@github.com/".insteadOf
-```
-
-## CI
-
-CI continuously builds, signs, and uploads a runner application to Synthetics, which runs predefined tests.
-
-### Build
-
-Before building the application, make sure the `BenchmarkTests/xcconfigs/Benchmark.local.xcconfig` configuration file is present and contains the `Mobile - Integration Org` client token, RUM application ID, and API Key. These values are sensitive and must be securely stored.
-
-```ini
-CLIENT_TOKEN=
-RUM_APPLICATION_ID=
-API_KEY=
-```
-
-### Sign
-
-To sign the runner application, the certificate and provision profile defined in [Synthetics.xcconfig](xcconfigs/Synthetics.xcconfig) and in [exportOptions.plist](exportOptions.plist) needs to be installed on the build machine. The certificate and profile are sensitive files and must be securely stored. Make sure to update both files when updating the certificate and provisioning profile, otherwise signing fails.
-
-> [!NOTE]
-> Certificate & Provisioning Profile are also available through the [App Store Connect API](https://developer.apple.com/documentation/appstoreconnectapi). But we don't have the tooling in place.
-
-### Upload
-
-The application version (build number) is set to the commit SHA of the current job, and the build is uploaded to Synthetics using the [datadog-ci](https://github.com/DataDog/datadog-ci) CLI. This step expects environment variables to authenticate with the `Mobile - Integration Org`:
-
-```bash
-export DATADOG_API_KEY=
-export DATADOG_APP_KEY=
-export S8S_APPLICATION_ID=
-```
-
 ## Development
 
 Each scenario is independent and can be considered as an app within the runner.
@@ -93,8 +48,66 @@ struct LogsScenario: Scenario {
 }
 ```
 
-Add the test to the [`SyntheticScenario`](Runner/Scenarios/SyntheticScenario.swift#L12) object so it can be selected by setting the `BENCHMARK_SCENARIO` environment variable.
+Once the scenario is created, add its name as an enum case of the [`SyntheticScenario`](Runner/Scenarios/SyntheticScenario.swift#L12) object so it can be selected by setting the `BENCHMARK_SCENARIO` environment variable.
+
+### Run a scenario
+
+There are two ways to execute a scenario:
+1. Use environment variables in the `Runner.xcscheme` (Edit Scheme from Xcode)
+    ```xml
+    <EnvironmentVariables>
+        <EnvironmentVariable
+            key = "BENCHMARK_RUN"
+            value = "<run>"
+            isEnabled = "YES">
+        </EnvironmentVariable>
+        <EnvironmentVariable
+            key = "BENCHMARK_SCENARIO"
+            value = "<scenario>"
+            isEnabled = "YES">
+        </EnvironmentVariable>
+    </EnvironmentVariables>
+    ```
+2. Open a deep link:
+    ```bash
+    xcrun simctl openurl booted 'bench://start?scenario=<scenario>&run=<run>'
+    ```
+    If you need to execute multiple scenarios/runs in the same application process, you must stop the previous execution with:
+    ```bash
+    xcrun simctl openurl booted 'bench://stop'
+    ```
 
 ### Synthetics Configuration
 
-Please refer to [Confluence page (internal)](https://datadoghq.atlassian.net/wiki/spaces/RUMP/pages/3981476482/Benchmarks+iOS)
+Please refer to the [Confluence page (internal)](https://datadoghq.atlassian.net/wiki/spaces/RUMP/pages/3981476482/Benchmarks+iOS)
+
+## CI
+
+CI continuously builds, signs, and uploads a runner application to Synthetics, which runs predefined tests.
+
+### Build
+
+Before building the application, make sure the `BenchmarkTests/xcconfigs/Benchmark.local.xcconfig` configuration file is present and contains the `Mobile - Integration Org` client token, RUM application ID, and API Key. These values are sensitive and must be securely stored.
+
+```ini
+CLIENT_TOKEN=
+RUM_APPLICATION_ID=
+API_KEY=
+```
+
+### Sign
+
+To sign the runner application, the certificate and provision profile defined in [Synthetics.xcconfig](xcconfigs/Synthetics.xcconfig) and in [exportOptions.plist](exportOptions.plist) needs to be installed on the build machine. The certificate and profile are sensitive files and must be securely stored. Make sure to update both files when updating the certificate and provisioning profile, otherwise signing fails.
+
+> [!NOTE]
+> Certificate & Provisioning Profile are also available through the [App Store Connect API](https://developer.apple.com/documentation/appstoreconnectapi). But we don't have the tooling in place.
+
+### Upload
+
+The application version (build number) is set to the commit SHA of the current job, and the build is uploaded to Synthetics using the [datadog-ci](https://github.com/DataDog/datadog-ci) CLI. This step expects environment variables to authenticate with the `Mobile - Integration Org`:
+
+```bash
+export DATADOG_API_KEY=
+export DATADOG_APP_KEY=
+export S8S_APPLICATION_ID=
+```
````
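
The updated README tells contributors to add each new scenario as an enum case of `SyntheticScenario`. As a rough illustration of that step, a hypothetical sketch — the real enum lives in `Runner/Scenarios/SyntheticScenario.swift`, which is not rendered in this commit view, and its case names and initializers may differ:

```swift
import Foundation

// Hypothetical sketch only — the actual SyntheticScenario enum is defined in
// Runner/Scenarios/SyntheticScenario.swift and may have a different shape.
internal enum SyntheticScenario: String {
    case sessionReplay
    case logs // new scenario case, selected with BENCHMARK_SCENARIO=logs

    /// Resolves the scenario from the `BENCHMARK_SCENARIO` environment variable.
    init?() {
        guard let name = ProcessInfo.processInfo.environment["BENCHMARK_SCENARIO"] else {
            return nil
        }
        self.init(rawValue: name)
    }
}
```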

BenchmarkTests/Runner/AppDelegate.swift

Lines changed: 114 additions & 32 deletions
```diff
@@ -5,51 +5,67 @@
  */
 
 import UIKit
+
 import DatadogInternal
+import DatadogCore
 import DatadogBenchmarks
 
 @main
 class AppDelegate: UIResponder, UIApplicationDelegate {
     var window: UIWindow?
+    var applicationInfo: AppInfo! //swiftlint:disable:this implicitly_unwrapped_optional
+    var vitals: Vitals?
 
     func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
-        guard let scenario = SyntheticScenario() else {
+        applicationInfo = try! AppInfo() // crash if info are missing or malformed
+
+        window = UIWindow(frame: UIScreen.main.bounds)
+
+        if let scenario = SyntheticScenario() {
+            let run = SyntheticRun()
+            start(scenario: scenario, run: run)
+        } else {
+            window?.rootViewController = UIViewController()
+        }
+
+        window?.makeKeyAndVisible()
+        return true
+    }
+
+    func application(_ app: UIApplication, open url: URL, options: [UIApplication.OpenURLOptionsKey: Any] = [:]) -> Bool {
+        guard let components = URLComponents(url: url, resolvingAgainstBaseURL: false) else {
             return false
         }
 
-        let run = SyntheticRun()
-        let applicationInfo = try! AppInfo() // crash if info are missing or malformed
+        // bench://stop
+        if components.host == "stop" {
+            stop()
+            return true
+        }
 
-        // Collect metrics during all run
-        let meter = Meter(
-            provider: Benchmarks.meterProvider(
-                with: Benchmarks.Configuration(
-                    info: applicationInfo,
-                    scenario: scenario,
-                    run: run
-                )
-            )
-        )
+        // bench://start?scenario=<scenario>&run=<run>
+        if components.host == "start", let scenario = SyntheticScenario(urlComponents: components), let run = SyntheticRun(urlComponents: components) {
+            start(scenario: scenario, run: run)
+            return true
+        }
+
+        return false
+    }
 
+    /// Starts instruments for the given run and scenario.
+    ///
+    /// - Parameters:
+    ///   - scenario: The benchmark scenario.
+    ///   - run: The benchmark run.
+    private func start(
+        scenario: SyntheticScenario,
+        run: SyntheticRun
+    ) {
         switch run {
         case .baseline, .instrumented:
-            meter.observeCPU()
-            meter.observeMemory()
-            meter.observeFPS()
-
+            collectApplicationVitals(scenario: scenario, run: run)
         case .profiling:
-            // Collect traces during profiling run
-            let profiler = Profiler(
-                provider: Benchmarks.tracerProvider(
-                    with: Benchmarks.Configuration(
-                        info: applicationInfo,
-                        scenario: scenario,
-                        run: run
-                    )
-                )
-            )
-
-            DatadogInternal.bench = (profiler, meter)
+            profileSDK(scenario: scenario, run: run)
         case .none:
             break
         }
@@ -60,11 +76,77 @@ class AppDelegate: UIResponder, UIApplicationDelegate {
             scenario.instrument(with: applicationInfo)
         }
 
-        window = UIWindow(frame: UIScreen.main.bounds)
         window?.rootViewController = scenario.initialViewController
-        window?.makeKeyAndVisible()
+    }
 
-        return true
+    /// Stops all current instruments.
+    ///
+    /// The same process can run multiple scenarios and instruments when receiving a deeplink.
+    /// It is important to stop current instruments before starting a new run.
+    private func stop() {
+        vitals = nil // stop collecting vitals
+        Datadog.stopInstance() // stop runner instrumentation
+        DatadogInternal.bench = (NOPBench(), NOPBench()) // stop profiling the sdk
+        window?.rootViewController = UIViewController()
+    }
+
+    /// Starts collection vitals of the runner application.
+    ///
+    /// - Parameters:
+    ///   - scenario: The benchmark scenario.
+    ///   - run: The benchmark run.
+    private func collectApplicationVitals(
+        scenario: SyntheticScenario,
+        run: SyntheticRun
+    ) {
+        let vitals = Vitals(
+            provider: Benchmarks.meterProvider(
+                with: Benchmarks.Configuration(
+                    info: applicationInfo,
+                    scenario: scenario,
+                    run: run
+                )
+            )
+        )
+
+        vitals.observeCPU()
+        vitals.observeMemory()
+        vitals.observeFPS()
+
+        self.vitals = vitals // Keep vitals in memory
+    }
+
+    /// Starts profiling the SDK while running the application.
+    ///
+    /// - Parameters:
+    ///   - scenario: The benchmark scenario.
+    ///   - run: The benchmark run.
+    private func profileSDK(
+        scenario: SyntheticScenario,
+        run: SyntheticRun
+    ) {
+        // Collect traces during profiling run
+        let profiler = Profiler(
+            provider: Benchmarks.tracerProvider(
+                with: Benchmarks.Configuration(
+                    info: applicationInfo,
+                    scenario: scenario,
+                    run: run
+                )
+            )
+        )
+
+        let meter = Meter(
+            provider: Benchmarks.meterProvider(
+                with: Benchmarks.Configuration(
+                    info: applicationInfo,
+                    scenario: scenario,
+                    run: run
+                )
+            )
+        )
+
+        DatadogInternal.bench = (profiler, meter) // Inject profiler and meter to collect telemetry
     }
 }
 
```

BenchmarkTests/Runner/BenchmarkMeter.swift

Lines changed: 0 additions & 42 deletions
```diff
@@ -8,59 +8,17 @@ import Foundation
 import DatadogInternal
 import DatadogBenchmarks
 import OpenTelemetryApi
-import OpenTelemetrySdk
 
 internal final class Meter: DatadogInternal.BenchmarkMeter {
     let meter: OpenTelemetryApi.Meter
 
-    let queue = DispatchQueue(label: "com.datadoghq.benchmarks.metrics", target: .global(qos: .utility))
-
     init(provider: MeterProvider) {
         self.meter = provider.get(
             instrumentationName: "benchmarks",
             instrumentationVersion: nil
         )
     }
 
-    @discardableResult
-    func observeMemory() -> OpenTelemetryApi.DoubleObserverMetric {
-        let memory = Memory(queue: queue)
-        return meter.createDoubleObservableGauge(name: "ios.benchmark.memory") { metric in
-            // report the maximum memory footprint that was recorded during push interval
-            if let value = memory.aggregation?.max {
-                metric.observe(value: value, labelset: .empty)
-            }
-
-            memory.reset()
-        }
-    }
-
-    @discardableResult
-    func observeCPU() -> OpenTelemetryApi.DoubleObserverMetric {
-        let cpu = CPU(queue: queue)
-        return meter.createDoubleObservableGauge(name: "ios.benchmark.cpu") { metric in
-            // report the average cpu usage that was recorded during push interval
-            if let value = cpu.aggregation?.avg {
-                metric.observe(value: value, labelset: .empty)
-            }
-
-            cpu.reset()
-        }
-    }
-
-    @discardableResult
-    func observeFPS() -> OpenTelemetryApi.IntObserverMetric {
-        let fps = FPS()
-        return meter.createIntObservableGauge(name: "ios.benchmark.fps.min") { metric in
-            // report the minimum frame rate that was recorded during push interval
-            if let value = fps.aggregation?.min {
-                metric.observe(value: value, labelset: .empty)
-            }
-
-            fps.reset()
-        }
-    }
-
     func counter(metric: @autoclosure () -> String) -> DatadogInternal.BenchmarkCounter {
         meter.createDoubleCounter(name: metric())
     }
```
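
The gauge observers removed from `Meter` are now invoked on a dedicated `Vitals` instance in the new `AppDelegate` (`vitals.observeCPU()`, `vitals.observeMemory()`, `vitals.observeFPS()`). The `Vitals` file itself is among the changes hidden from this view; below is a hedged sketch of what it might look like, assuming it mirrors the removed observers (only CPU is shown; memory and FPS would follow the same pattern):

```swift
import Foundation
import OpenTelemetryApi

// Hypothetical sketch of the relocated vitals collector, assuming it mirrors
// the observers removed from `Meter` above; the actual implementation lives in
// a file not rendered in this commit view.
internal final class Vitals {
    private let meter: OpenTelemetryApi.Meter
    private let queue = DispatchQueue(label: "com.datadoghq.benchmarks.vitals", target: .global(qos: .utility))

    init(provider: MeterProvider) {
        self.meter = provider.get(instrumentationName: "benchmarks", instrumentationVersion: nil)
    }

    @discardableResult
    func observeCPU() -> OpenTelemetryApi.DoubleObserverMetric {
        let cpu = CPU(queue: queue)
        return meter.createDoubleObservableGauge(name: "ios.benchmark.cpu") { metric in
            // report the average cpu usage recorded during the push interval
            if let value = cpu.aggregation?.avg {
                metric.observe(value: value, labelset: .empty)
            }
            cpu.reset()
        }
    }
}
```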

BenchmarkTests/Runner/BenchmarkProfiler.swift

Lines changed: 0 additions & 1 deletion
```diff
@@ -8,7 +8,6 @@ import Foundation
 import DatadogInternal
 import DatadogBenchmarks
 import OpenTelemetryApi
-import OpenTelemetrySdk
 
 internal final class Profiler: DatadogInternal.BenchmarkProfiler {
     let provider: TracerProvider
```
