First, you may want to ramp up on Kubernetes and Custom Resource Definitions (CRDs), as Tekton implements several Kubernetes resource controllers configured by Tekton CRDs. Then follow these steps to start developing and contributing project code:
Welcome to the project! :clap::clap::clap: You may find these resources helpful to "ramp up" on some of the technologies this project builds and runs on.
This project extends Kubernetes (aka k8s) with Custom Resource Definitions (CRDs).
To learn about how this works, check out our developer documentation.
After reading the developer docs, you may find it useful to return to these Tekton Pipeline docs: Tasks and Pipelines (i.e., Tekton CRDs), and see what happens when they are run.
GitHub is used for project Source Code Management (SCM), using the SSH protocol for authentication.
You must install these tools:
git: For source control.
pre-commit: Generates and runs a few local checks (as Git hooks) to ensure the pushed code is valid. All checks are performed before the git push command completes.
# After the install step, run the pre-commit binary at the repository root to install the git hooks.
pre-commit install
# Run the hooks against all of the files
pre-commit run --all-files
go: The language Tekton Pipelines is built in.
Note: Go version v1.15 or higher is recommended.
ko: The Tekton project uses ko to simplify building its container images from Go source, pushing those images to the configured image repository, and deploying them into Kubernetes clusters.
Note: ko version v0.5.1 or higher is required for pipeline to work correctly.
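If you do not already have ko installed, one way to get it is via go install (a minimal sketch; pin whichever release fits your environment):
# Install ko into $HOME/go/bin (or $GOPATH/bin)
go install github.com/google/ko@latest
# Confirm the installed version meets the minimum requirement
ko version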
kubectl: For interacting with your Kubernetes cluster.
The user interacting with your K8s cluster must be a cluster admin to create role bindings.
Google Cloud Platform (GCP) example:
# Using gcloud to get your current user
USER=$(gcloud config get-value core/account)
# Make that user a cluster admin
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole=cluster-admin \
--user="${USER}"
bash v4 or higher: For scripts used to generate code and update dependencies. On macOS the default bash is too old; you can use Homebrew to install a later version.
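For example, on macOS (a sketch assuming you already have Homebrew installed):
# Install a current bash alongside the system /bin/bash
brew install bash
# Check which version the bash on your PATH reports (you may need Homebrew's bin directory earlier in PATH)
bash --version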
go-licenses: Used in e2e tests.
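If you plan to run the e2e tests locally, go-licenses can be installed with go install (a sketch; the pinned version is up to you):
# Install go-licenses for the e2e license checks
go install github.com/google/go-licenses@latest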
(Optional) yamllint is run against every PR as part of pre-commit. You may want to install this tool so that pre-commit can use it; otherwise it will report a failure when linting YAML files.
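For example, yamllint can be installed with pip (a minimal sketch; a system package manager works equally well):
# Install yamllint so that pre-commit can find it on your PATH
pip3 install --user yamllint
# Lint the project's YAML manifests directly
yamllint config/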
(Optional) golangci-lint is run against every PR. You may want to install and run this tool locally to iterate quickly on linter issues.
Note: Linter findings depend on your installed Go version. Use the version specified in go.mod to reproduce the findings reported on your PR.
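Once installed, you can run the same linters locally over the whole module, for example:
# Run all configured linters against every package
golangci-lint run ./...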
(Optional)
woke
is executed for every pull
request. To ensure your work does not contain offensive language, you may
want to install and run this tool locally.
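A minimal sketch for installing and running it locally, assuming a working Go toolchain:
# Install the woke CLI
go install github.com/get-woke/woke@latest
# Scan the working tree for non-inclusive language (by default woke checks files in the current directory)
woke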
To build, deploy and run your Tekton Objects with ko
, you'll need to set these environment variables:
GOROOT
: Set GOROOT
to the location of the Go installation you want ko
to use for builds.
Note: You may need to set
GOROOT
if you installed Go tools to a non-default location or have multiple Go versions installed.
If it is not set, ko
infers the location by effectively using go env GOROOT
.
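For example, to point ko at the Go installation that the go binary on your PATH already uses (a sketch; adjust if you keep multiple toolchains):
# Use whatever Go installation `go` itself reports
export GOROOT="$(go env GOROOT)"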
KO_DOCKER_REPO
: The docker repository to which developer images should be pushed.
For example:
Using Google Container Registry (GCR):
# format: `gcr.io/${GCP-PROJECT-NAME}`
export KO_DOCKER_REPO='gcr.io/my-gcloud-project-name'
Using Docker Desktop (Docker Hub):
# format: 'docker.io/${DOCKER_HUB_USERNAME}'
export KO_DOCKER_REPO='docker.io/my-dockerhub-username'
You can also host your own Docker Registry server and reference it:
# format: ${REGISTRY_HOST}:${PORT}/${REPOSITORY_NAME}
export KO_DOCKER_REPO='localhost:5000/mypipelineimages'
Optionally, add $HOME/go/bin
to your system PATH
so that any tooling installed via go get
will work properly. For example:
export PATH="${PATH}:$HOME/go/bin"
Note: It is recommended to add these environment variables to your shell's configuration files (e.g.,
~/.bash_profile
or ~/.bashrc
).
The Tekton project requires that you develop (commit) code changes to branches that belong to a fork of the tektoncd/pipeline
repository in your GitHub account before submitting them as Pull Requests (PRs) to the actual project repository.
Create a fork of the tektoncd/pipeline
repository in your GitHub account.
Create a clone of your fork on your local machine:
git clone git@github.com:${YOUR_GITHUB_USERNAME}/pipeline.git
Note: Tekton uses Go Modules (i.e.,
go mod
) for package management so you may clone the repository to a location of your choosing.
Configure git
remote repositories
Adding tektoncd/pipeline
as the upstream
and your fork as the origin
remote repositories to your .git/config
sets you up nicely for regularly syncing your fork and submitting pull requests.
Change into the project directory
cd pipeline
Configure Tekton as the upstream
repository
git remote add upstream git@github.com:tektoncd/pipeline.git
Optional: Prevent accidental pushing of commits by changing the upstream URL to no_push
git remote set-url --push upstream no_push
Configure your fork as the origin
repository
git remote add origin git@github.com:${YOUR_GITHUB_USERNAME}/pipeline.git
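You can confirm the remotes are wired up as expected:
# List the configured remotes and their fetch/push URLs
git remote -v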
Depending on your chosen container registry that you set in the KO_DOCKER_REPO
environment variable, you may need to additionally configure access control to allow ko
to authenticate to it.
Docker Desktop provides seamless integration with both a local (default) image registry and Docker Hub remote registries. To use Docker Hub registries with ko, all you need to do is configure Docker Desktop with your Docker ID and password in its dashboard.
If using GCR with ko
, make sure to configure
authentication
for your KO_DOCKER_REPO
if required. To be able to push images to
gcr.io/<project>
, you need to run this once:
gcloud auth configure-docker
To be able to pull images from gcr.io/<project>, please follow the instructions here to configure IAM policies for the services that will pull images from your GCR.
If you choose to run GKE and GCR in the same GCP project, please follow the example GKE setup and make sure to add storage-full
to the --scopes
args in the example to give the GKE default service account full access to your GCR. Alternatively, you can grant the GKE default service account read access to your GCR by running:
gcloud projects add-iam-policy-binding <project-number> \
--member='serviceAccount:<project-number>-compute@developer.gserviceaccount.com' \
--role='roles/storage.objectViewer'
For more information about GCP Compute Engine default service accounts, please check here.
After configuring IAM policy of your GCR, the example GKE setup in this guide now has permissions to push and pull images from your GCR. If you choose to use a different setup with fewer default permissions, or your GKE cluster that will run Tekton is in a different project than your GCR registry, you will need to provide the Tekton pipelines controller and webhook service accounts with GCR credentials. See documentation on using GCR with GKE for more information. To do this, create a secret for your docker credentials and reference this secret from the controller and webhook service accounts, as follows.
Create a secret, for example:
kubectl create secret generic ${SECRET_NAME} \
--from-file=.dockerconfigjson=<path/to/.docker/config.json> \
--type=kubernetes.io/dockerconfigjson \
--namespace=tekton-pipelines
See Configuring authentication for Docker for more detailed information on creating secrets containing registry credentials.
Update the tekton-pipelines-controller
and tekton-pipelines-webhook
service accounts
to reference the newly created secret by modifying
the definitions of these service accounts
as shown below.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tekton-pipelines-controller
  namespace: tekton-pipelines
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: default
    app.kubernetes.io/part-of: tekton-pipelines
imagePullSecrets:
- name: ${SECRET_NAME}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tekton-pipelines-webhook
  namespace: tekton-pipelines
  labels:
    app.kubernetes.io/component: webhook
    app.kubernetes.io/instance: default
    app.kubernetes.io/part-of: tekton-pipelines
imagePullSecrets:
- name: ${SECRET_NAME}
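Alternatively (a sketch, not the documented upstream approach), you can patch the live service accounts instead of editing their YAML definitions by hand:
# Attach the registry secret to both service accounts in place
kubectl patch serviceaccount tekton-pipelines-controller -n tekton-pipelines \
  -p "{\"imagePullSecrets\": [{\"name\": \"${SECRET_NAME}\"}]}"
kubectl patch serviceaccount tekton-pipelines-webhook -n tekton-pipelines \
  -p "{\"imagePullSecrets\": [{\"name\": \"${SECRET_NAME}\"}]}"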
The recommended minimum development configuration is:
Kind is a great tool for working with Kubernetes clusters locally. It is particularly useful to quickly test code against different cluster configurations.
Install required tools (note: may require a newer version of Go).
Install Docker.
Create cluster:
kind create cluster
Configure ko:
export KO_DOCKER_REPO="kind.local"
export KIND_CLUSTER_NAME="kind" # only needed if you used a custom name in the previous step
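You can then verify that the cluster is up and that kubectl is pointing at it:
# kind prefixes its kubectl contexts with "kind-"
kubectl cluster-info --context "kind-${KIND_CLUSTER_NAME}"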
Optional: As a convenience, the Tekton plumbing project provides a script named 'tekton_in_kind.sh' that leverages kind to create a cluster and install the Tekton Pipeline, Tekton Triggers and Tekton Dashboard components into it.
If you used the 'tekton_in_kind.sh' plumbing script to deploy your kind
cluster, you need to tell ko
to use the local registry as mentioned here.
export KO_DOCKER_REPO="localhost:5000"
You may find it useful to save the ID of your GCP project in an environment variable (e.g., PROJECT_ID).
Create a GKE cluster (with --cluster-version=latest, but you can use any version 1.18 or later):
export PROJECT_ID=my-gcp-project
export CLUSTER_NAME=mycoolcluster
gcloud container clusters create $CLUSTER_NAME \
--enable-autoscaling \
--min-nodes=1 \
--max-nodes=3 \
--scopes=cloud-platform \
--no-issue-client-certificate \
--project=$PROJECT_ID \
--region=us-central1 \
--machine-type=e2-standard-4 \
--num-nodes=1 \
--cluster-version=1.28
Note: The recommended GCE machine type is 'e2-standard-4'.
Note: The '--scopes' argument on the 'gcloud container clusters create' command controls which GCP resources the cluster's default service account has access to; for example, to give the default service account full access to your GCR registry, you can add 'storage-full' to the --scopes arg. See Authenticating to GCP for more details.
Grant cluster-admin permissions to the current user:
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole=cluster-admin \
--user=$(gcloud config get-value core/account)
While iterating on code changes to the project, you may need to:
Update your (external) dependencies with ./hack/update-deps.sh
Regenerate client and type code with ./hack/update-codegen.sh
Regenerate OpenAPI specs with ./hack/update-openapigen.sh
To make changes to these CRDs, you will probably interact with the CRD type definitions and the generated client code (regenerated with ./hack/update-codegen.sh).
The ko
command is the preferred method to manage (i.e., create, modify or delete) Tekton Objects in Kubernetes from your local fork of the project. Some common operations include:
You can stand up a version of Tekton built from your local clone's code in the currently configured K8s context (i.e., kubectl config current-context):
ko apply -R -f config/
You can verify that your development installation using ko was successful by checking whether the Tekton pipeline pods are running in Kubernetes:
kubectl get pods -n tekton-pipelines
You can clean up everything with:
# Keep the namespace of the pipeline component
ko delete -f config/
# Also delete the namespace of the pipeline component
ko delete -R -f config/
Note: If a pipeline component shares its namespace with other components such as the dashboard or triggers, executing ko delete -R -f config/ deletes those other components too.
As you make changes to the code, you can redeploy your controller with:
ko apply -f config/controller.yaml
When managing different development branches of code (with changed Tekton objects and controllers) in the same K8s instance, it may be helpful to install them into a custom (non-default) namespace. The ability to map a code branch to a corresponding namespace may make it easier to identify and manage the objects as a group as well as isolate log output.
To install into a different namespace you can use this script:
#!/usr/bin/env bash
set -e
# Set your target namespace here
TARGET_NAMESPACE=new-target-namespace
ko resolve -R -f config | sed -e '/kind: Namespace/!b;n;n;s/:.*/: '"${TARGET_NAMESPACE}"'/' | \
sed "s/namespace: tekton-pipelines$/namespace: ${TARGET_NAMESPACE}/" | \
kubectl apply -R -f-
kubectl set env deployments --all SYSTEM_NAMESPACE=${TARGET_NAMESPACE} -n ${TARGET_NAMESPACE}
This script causes ko to resolve the manifests, updating the namespace values in the K8s configuration files within the config/ subdirectory to a name of your choosing before applying them. It also sets the default system namespace (SYSTEM_NAMESPACE) used by the deployed components to the new value.
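You can then confirm that the components came up in the custom namespace:
# The controller and webhook pods should be running in the new namespace
kubectl get pods -n ${TARGET_NAMESPACE}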
To look at the controller logs, run:
kubectl -n tekton-pipelines logs $(kubectl -n tekton-pipelines get pods -l app=tekton-pipelines-controller -o name)
To look at the webhook logs, run:
kubectl -n tekton-pipelines logs $(kubectl -n tekton-pipelines get pods -l app=tekton-pipelines-webhook -o name)
To look at the logs for individual TaskRuns
or PipelineRuns
, see
docs on accessing logs.
If you need to add a new CRD type, you will need to add: