Commit 5a57ee8

Merge pull request #333 from liyinan926/master
Updated to use the v1beta1 version of the APIs
2 parents 97d4b14 + c41576b commit 5a57ee8


Showing 53 changed files with 741 additions and 664 deletions.

README.md

Lines changed: 4 additions & 2 deletions

@@ -10,9 +10,11 @@
 
 ## Project Status
 
-**Project status:** *alpha*
+**Project status:** *beta*
 
-The Kubernetes Operator for Apache Spark is still under active development. Backward compatibility of the APIs is not guaranteed for alpha releases.
+The Kubernetes Operator for Apache Spark is under active development, but backward compatibility of the APIs is guaranteed for beta releases.
+
+**If you are currently using the `v1alpha1` version of the APIs in your manifests, please update them to use the `v1beta1` version by changing `apiVersion: "sparkoperator.k8s.io/v1alpha1"` to `apiVersion: "sparkoperator.k8s.io/v1beta1"`. You will also need to delete the `v1alpha1` version of the CustomResourceDefinitions named `sparkapplications.sparkoperator.k8s.io` and `scheduledsparkapplications.sparkoperator.k8s.io`, and replace them with the `v1beta1` version either by installing the latest version of the operator or by running `kubectl create -f manifest/spark-operator-crds.yaml`.**
 
 Customization of Spark pods, e.g., mounting arbitrary volumes and setting pod affinity, is currently experimental and implemented using a Kubernetes
 [Mutating Admission Webhook](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/), which became beta in Kubernetes 1.9.
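
In practice (a sketch, not part of the commit), the migration described in the note above amounts to the following commands; `my-spark-app.yaml` stands in for your own manifests, and note that deleting a CRD also deletes any existing objects of that type, so plan to re-submit applications afterwards:

```sh
# Point existing manifests at the new API version.
sed -i 's|sparkoperator.k8s.io/v1alpha1|sparkoperator.k8s.io/v1beta1|g' my-spark-app.yaml

# Remove the v1alpha1 CustomResourceDefinitions. Caution: this also deletes
# existing SparkApplication and ScheduledSparkApplication objects.
kubectl delete crd sparkapplications.sparkoperator.k8s.io
kubectl delete crd scheduledsparkapplications.sparkoperator.k8s.io

# Recreate the CRDs at v1beta1 (installing the latest operator does the same).
kubectl create -f manifest/spark-operator-crds.yaml

# Re-submit applications using the updated manifests.
kubectl apply -f my-spark-app.yaml
```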

docs/api.md

Lines changed: 2 additions & 2 deletions

@@ -4,8 +4,8 @@ The Kubernetes Operator for Apache Spark uses [CustomResourceDefinitions](https
 named `SparkApplication` and `ScheduledSparkApplication` for specifying one-time Spark applications and Spark applications
 that are supposed to run on a standard [cron](https://en.wikipedia.org/wiki/Cron) schedule. Similarly to other kinds of
 Kubernetes resources, they consist of a specification in a `Spec` field and a `Status` field. The definitions are organized
-in the following structure. The v1alpha1 version of the API definition is implemented
-[here](../pkg/apis/sparkoperator.k8s.io/v1alpha1/types.go).
+in the following structure. The v1beta1 version of the API definition is implemented
+[here](../pkg/apis/sparkoperator.k8s.io/v1beta1/types.go).
 
 ```
 ScheduledSparkApplication

docs/gcp.md

Lines changed: 1 addition & 1 deletion

@@ -43,7 +43,7 @@ The ones set in `core-site.xml` apply to all applications using the image. Also
 variable `GCS_PROJECT_ID` must be set when using the image at `gcr.io/ynli-k8s/spark:v2.3.0-gcs`.
 
 ```yaml
-apiVersion: "sparkoperator.k8s.io/v1alpha1"
+apiVersion: "sparkoperator.k8s.io/v1beta1"
 kind: SparkApplication
 metadata:
   name: foo-gcs-bg

docs/quick-start-guide.md

Lines changed: 1 addition & 1 deletion

@@ -135,7 +135,7 @@ $ kubectl get sparkapplications spark-pi -o=yaml
 This will show something similar to the following:
 
 ```yaml
-apiVersion: sparkoperator.k8s.io/v1alpha1
+apiVersion: sparkoperator.k8s.io/v1beta1
 kind: SparkApplication
 metadata:
   ...

docs/user-guide.md

Lines changed: 2 additions & 2 deletions

@@ -45,7 +45,7 @@ It also has fields for specifying the unified container image (to use for both t
 Below is an example showing part of a `SparkApplication` specification:
 
 ```yaml
-apiVersion: sparkoperator.k8s.io/v1alpha1
+apiVersion: sparkoperator.k8s.io/v1beta1
 kind: SparkApplication
 metadata:
   name: spark-pi
@@ -387,7 +387,7 @@ client so effectively the driver gets restarted.
 The operator supports running a Spark application on a standard [cron](https://en.wikipedia.org/wiki/Cron) schedule using objects of the `ScheduledSparkApplication` custom resource type. A `ScheduledSparkApplication` object specifies a cron schedule on which the application should run and a `SparkApplication` template from which a `SparkApplication` object for each run of the application is created. The following is an example `ScheduledSparkApplication`:
 
 ```yaml
-apiVersion: "sparkoperator.k8s.io/v1alpha1"
+apiVersion: "sparkoperator.k8s.io/v1beta1"
 kind: ScheduledSparkApplication
 metadata:
   name: spark-pi-scheduled
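
As a usage sketch (not part of this diff, assuming the v1beta1 CRDs are installed): a `ScheduledSparkApplication` like the one above is managed with ordinary kubectl commands, and the operator creates one `SparkApplication` per scheduled run.

```sh
# Create the scheduled application from the updated example manifest.
kubectl apply -f examples/spark-pi-schedule.yaml

# Inspect the schedule and the per-run SparkApplication objects.
kubectl get scheduledsparkapplications spark-pi-scheduled -o yaml
kubectl get sparkapplications
```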

examples/spark-pi-prometheus.yaml

Lines changed: 1 addition & 1 deletion

@@ -14,7 +14,7 @@
 # limitations under the License.
 #
 
-apiVersion: "sparkoperator.k8s.io/v1alpha1"
+apiVersion: "sparkoperator.k8s.io/v1beta1"
 kind: SparkApplication
 metadata:
   name: spark-pi

examples/spark-pi-schedule.yaml

Lines changed: 1 addition & 1 deletion

@@ -14,7 +14,7 @@
 # limitations under the License.
 #
 
-apiVersion: "sparkoperator.k8s.io/v1alpha1"
+apiVersion: "sparkoperator.k8s.io/v1beta1"
 kind: ScheduledSparkApplication
 metadata:
   name: spark-pi-scheduled

examples/spark-pi.yaml

Lines changed: 1 addition & 1 deletion

@@ -14,7 +14,7 @@
 # limitations under the License.
 #
 
-apiVersion: "sparkoperator.k8s.io/v1alpha1"
+apiVersion: "sparkoperator.k8s.io/v1beta1"
 kind: SparkApplication
 metadata:
   name: spark-pi

examples/spark-py-pi.yaml

Lines changed: 1 addition & 1 deletion

@@ -16,7 +16,7 @@
 # Support for Python is experimental, and requires building SNAPSHOT image of Apache Spark,
 # with `imagePullPolicy` set to Always
 
-apiVersion: "sparkoperator.k8s.io/v1alpha1"
+apiVersion: "sparkoperator.k8s.io/v1beta1"
 kind: SparkApplication
 metadata:
   name: pyspark-pi

manifest/spark-operator-crds.yaml

Lines changed: 2 additions & 2 deletions

@@ -100,7 +100,7 @@ spec:
       - Scala
       - Python
       - R
-  version: v1alpha1
+  version: v1beta1
 ---
 apiVersion: apiextensions.k8s.io/v1beta1
 kind: CustomResourceDefinition
@@ -202,4 +202,4 @@ spec:
       - Scala
       - Python
       - R
-  version: v1alpha1
+  version: v1beta1
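
As a quick check (a sketch, assuming the manifest above has been applied): the served API version is recorded in `.spec.version` of each CRD.

```sh
# Both commands should print v1beta1 after the upgrade.
kubectl get crd sparkapplications.sparkoperator.k8s.io -o jsonpath='{.spec.version}'
kubectl get crd scheduledsparkapplications.sparkoperator.k8s.io -o jsonpath='{.spec.version}'
```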

manifest/spark-operator-with-metrics.yaml

Lines changed: 4 additions & 4 deletions

@@ -21,13 +21,13 @@ metadata:
   namespace: spark-operator
   labels:
     app.kubernetes.io/name: sparkoperator
-    app.kubernetes.io/version: v2.4.0-v1alpha1
+    app.kubernetes.io/version: v2.4.0-v1beta1
 spec:
   replicas: 1
   selector:
     matchLabels:
       app.kubernetes.io/name: sparkoperator
-      app.kubernetes.io/version: v2.4.0-v1alpha1
+      app.kubernetes.io/version: v2.4.0-v1beta1
   strategy:
     type: Recreate
   template:
@@ -38,14 +38,14 @@ spec:
         prometheus.io/path: "/metrics"
       labels:
         app.kubernetes.io/name: sparkoperator
-        app.kubernetes.io/version: v2.4.0-v1alpha1
+        app.kubernetes.io/version: v2.4.0-v1beta1
       initializers:
         pending: []
     spec:
       serviceAccountName: sparkoperator
      containers:
      - name: sparkoperator
-        image: gcr.io/spark-operator/spark-operator:v2.4.0-v1alpha1-latest
+        image: gcr.io/spark-operator/spark-operator:v2.4.0-v1beta1-latest
        imagePullPolicy: Always
        ports:
        - containerPort: 10254

manifest/spark-operator-with-webhook.yaml

Lines changed: 8 additions & 8 deletions

@@ -21,20 +21,20 @@ metadata:
   namespace: spark-operator
   labels:
     app.kubernetes.io/name: sparkoperator
-    app.kubernetes.io/version: v2.4.0-v1alpha1
+    app.kubernetes.io/version: v2.4.0-v1beta1
 spec:
   replicas: 1
   selector:
     matchLabels:
       app.kubernetes.io/name: sparkoperator
-      app.kubernetes.io/version: v2.4.0-v1alpha1
+      app.kubernetes.io/version: v2.4.0-v1beta1
   strategy:
     type: Recreate
   template:
     metadata:
       labels:
         app.kubernetes.io/name: sparkoperator
-        app.kubernetes.io/version: v2.4.0-v1alpha1
+        app.kubernetes.io/version: v2.4.0-v1beta1
       initializers:
         pending: []
     spec:
@@ -45,7 +45,7 @@ spec:
           secretName: spark-webhook-certs
       containers:
       - name: sparkoperator
-        image: gcr.io/spark-operator/spark-operator:v2.4.0-v1alpha1-latest
+        image: gcr.io/spark-operator/spark-operator:v2.4.0-v1beta1-latest
         imagePullPolicy: Always
         volumeMounts:
         - name: webhook-certs
@@ -63,20 +63,20 @@ metadata:
   namespace: spark-operator
   labels:
     app.kubernetes.io/name: sparkoperator
-    app.kubernetes.io/version: v2.4.0-v1alpha1
+    app.kubernetes.io/version: v2.4.0-v1beta1
 spec:
   backoffLimit: 3
   template:
     metadata:
       labels:
         app.kubernetes.io/name: sparkoperator
-        app.kubernetes.io/version: v2.4.0-v1alpha1
+        app.kubernetes.io/version: v2.4.0-v1beta1
     spec:
       serviceAccountName: sparkoperator
      restartPolicy: Never
      containers:
      - name: main
-        image: gcr.io/spark-operator/spark-operator:v2.4.0-v1alpha1-latest
+        image: gcr.io/spark-operator/spark-operator:v2.4.0-v1beta1-latest
        imagePullPolicy: IfNotPresent
        command: ["/usr/bin/gencerts.sh", "-p"]
 ---
@@ -92,4 +92,4 @@ spec:
     name: webhook
   selector:
     app.kubernetes.io/name: sparkoperator
-    app.kubernetes.io/version: v2.4.0-v1alpha1
+    app.kubernetes.io/version: v2.4.0-v1beta1

manifest/spark-operator.yaml

Lines changed: 4 additions & 4 deletions

@@ -21,27 +21,27 @@ metadata:
   namespace: spark-operator
   labels:
     app.kubernetes.io/name: sparkoperator
-    app.kubernetes.io/version: v2.4.0-v1alpha1
+    app.kubernetes.io/version: v2.4.0-v1beta1
 spec:
   replicas: 1
   selector:
     matchLabels:
       app.kubernetes.io/name: sparkoperator
-      app.kubernetes.io/version: v2.4.0-v1alpha1
+      app.kubernetes.io/version: v2.4.0-v1beta1
   strategy:
     type: Recreate
   template:
     metadata:
       labels:
         app.kubernetes.io/name: sparkoperator
-        app.kubernetes.io/version: v2.4.0-v1alpha1
+        app.kubernetes.io/version: v2.4.0-v1beta1
      initializers:
        pending: []
    spec:
      serviceAccountName: sparkoperator
      containers:
      - name: sparkoperator
-        image: gcr.io/spark-operator/spark-operator:v2.4.0-v1alpha1-latest
+        image: gcr.io/spark-operator/spark-operator:v2.4.0-v1beta1-latest
        imagePullPolicy: Always
        args:
        - -logtostderr

pkg/apis/sparkoperator.k8s.io/v1beta1/defaults.go

Lines changed: 74 additions & 0 deletions

@@ -0,0 +1,74 @@
+/*
+Copyright 2017 Google LLC
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+    https://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+package v1beta1
+
+// SetSparkApplicationDefaults sets default values for certain fields of a SparkApplication.
+func SetSparkApplicationDefaults(app *SparkApplication) {
+    if app == nil {
+        return
+    }
+
+    if app.Spec.Mode == "" {
+        app.Spec.Mode = ClusterMode
+    }
+
+    if app.Spec.RestartPolicy.Type == "" {
+        app.Spec.RestartPolicy.Type = Never
+    }
+
+    if app.Spec.RestartPolicy.Type != Never {
+        // Default to 5 sec if the RestartPolicy is OnFailure or Always and these values aren't specified.
+        if app.Spec.RestartPolicy.OnFailureRetryInterval == nil {
+            app.Spec.RestartPolicy.OnFailureRetryInterval = new(int64)
+            *app.Spec.RestartPolicy.OnFailureRetryInterval = 5
+        }
+
+        if app.Spec.RestartPolicy.OnSubmissionFailureRetryInterval == nil {
+            app.Spec.RestartPolicy.OnSubmissionFailureRetryInterval = new(int64)
+            *app.Spec.RestartPolicy.OnSubmissionFailureRetryInterval = 5
+        }
+    }
+
+    setDriverSpecDefaults(&app.Spec.Driver)
+    setExecutorSpecDefaults(&app.Spec.Executor)
+}
+
+func setDriverSpecDefaults(spec *DriverSpec) {
+    if spec.Cores == nil {
+        spec.Cores = new(float32)
+        *spec.Cores = 1
+    }
+    if spec.Memory == nil {
+        spec.Memory = new(string)
+        *spec.Memory = "1g"
+    }
+}
+
+func setExecutorSpecDefaults(spec *ExecutorSpec) {
+    if spec.Cores == nil {
+        spec.Cores = new(float32)
+        *spec.Cores = 1
+    }
+    if spec.Memory == nil {
+        spec.Memory = new(string)
+        *spec.Memory = "1g"
+    }
+    if spec.Instances == nil {
+        spec.Instances = new(int32)
+        *spec.Instances = 1
+    }
+}

pkg/apis/sparkoperator.k8s.io/v1beta1/types.go

Lines changed: 1 addition & 0 deletions

@@ -256,6 +256,7 @@ const (
     InvalidatingState ApplicationStateType = "INVALIDATING"
     SucceedingState   ApplicationStateType = "SUCCEEDING"
     FailingState      ApplicationStateType = "FAILING"
+    UnknownState      ApplicationStateType = "UNKNOWN"
 )
 
 // ApplicationState tells the current state of the application and an error message in case of failures.

pkg/config/config.go

Lines changed: 3 additions & 3 deletions

@@ -19,7 +19,7 @@ package config
 import (
     "fmt"
 
-    "github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io/v1alpha1"
+    "github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io/v1beta1"
 )
 
 // GetDriverAnnotationOption returns a spark-submit option for a driver annotation of the given key and value.
@@ -33,7 +33,7 @@ func GetExecutorAnnotationOption(key string, value string) string {
 }
 
 // GetDriverEnvVarConfOptions returns a list of spark-submit options for setting driver environment variables.
-func GetDriverEnvVarConfOptions(app *v1alpha1.SparkApplication) []string {
+func GetDriverEnvVarConfOptions(app *v1beta1.SparkApplication) []string {
     var envVarConfOptions []string
     for key, value := range app.Spec.Driver.EnvVars {
         envVar := fmt.Sprintf("%s%s=%s", SparkDriverEnvVarConfigKeyPrefix, key, value)
@@ -43,7 +43,7 @@ func GetDriverEnvVarConfOptions(app *v1alpha1.SparkApplication) []string {
 }
 
 // GetExecutorEnvVarConfOptions returns a list of spark-submit options for setting executor environment variables.
-func GetExecutorEnvVarConfOptions(app *v1alpha1.SparkApplication) []string {
+func GetExecutorEnvVarConfOptions(app *v1beta1.SparkApplication) []string {
     var envVarConfOptions []string
     for key, value := range app.Spec.Executor.EnvVars {
         envVar := fmt.Sprintf("%s%s=%s", SparkExecutorEnvVarConfigKeyPrefix, key, value)
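
For orientation (an illustration, not part of the diff): these helpers produce `key=value` strings that the operator passes to spark-submit as `--conf` options. Assuming the prefix constants hold the standard Spark properties `spark.kubernetes.driverEnv.` and `spark.executorEnv.`, the env vars used in the tests below would come out roughly as:

```sh
# Hand-written equivalent of the generated options; the jar path is a placeholder.
spark-submit \
  --conf spark.kubernetes.driverEnv.ENV1=VALUE1 \
  --conf spark.kubernetes.driverEnv.ENV2=VALUE2 \
  --conf spark.executorEnv.ENV1=VALUE1 \
  --conf spark.executorEnv.ENV2=VALUE2 \
  local:///opt/spark/examples/jars/spark-examples.jar
```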

pkg/config/config_test.go

Lines changed: 9 additions & 9 deletions

@@ -22,14 +22,14 @@ import (
 
     "github.com/stretchr/testify/assert"
 
-    "github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io/v1alpha1"
+    "github.com/GoogleCloudPlatform/spark-on-k8s-operator/pkg/apis/sparkoperator.k8s.io/v1beta1"
 )
 
 func TestGetDriverEnvVarConfOptions(t *testing.T) {
-    app := &v1alpha1.SparkApplication{
-        Spec: v1alpha1.SparkApplicationSpec{
-            Driver: v1alpha1.DriverSpec{
-                SparkPodSpec: v1alpha1.SparkPodSpec{
+    app := &v1beta1.SparkApplication{
+        Spec: v1beta1.SparkApplicationSpec{
+            Driver: v1beta1.DriverSpec{
+                SparkPodSpec: v1beta1.SparkPodSpec{
                     EnvVars: map[string]string{
                         "ENV1": "VALUE1",
                         "ENV2": "VALUE2",
@@ -50,10 +50,10 @@ func TestGetDriverEnvVarConfOptions(t *testing.T) {
 }
 
 func TestGetExecutorEnvVarConfOptions(t *testing.T) {
-    app := &v1alpha1.SparkApplication{
-        Spec: v1alpha1.SparkApplicationSpec{
-            Executor: v1alpha1.ExecutorSpec{
-                SparkPodSpec: v1alpha1.SparkPodSpec{
+    app := &v1beta1.SparkApplication{
+        Spec: v1beta1.SparkApplicationSpec{
+            Executor: v1beta1.ExecutorSpec{
+                SparkPodSpec: v1beta1.SparkPodSpec{
                     EnvVars: map[string]string{
                         "ENV1": "VALUE1",
                         "ENV2": "VALUE2",
