---
title: "Installing Knative"
weight: 6
type: "docs"
---
This guide walks you through installing the latest version of Knative. Note: if you are upgrading an existing installation, follow the upgrade instructions instead.

Knative has two components, which can be installed and used independently or together:

- **Serving** provides an abstraction for stateless, request-driven services that can scale to zero.
- **Eventing** provides abstractions for binding event sources and event consumers.

Knative also has an Observability plugin {{< feature-state version="v0.14" state="deprecated" short=true >}}, which provides standard tooling for visibility into the health of software running on Knative.
This guide assumes that you want to install an upstream Knative release on a Kubernetes cluster. A growing number of vendors have managed Knative offerings; see the Knative Offerings page for a full list.
Knative {{< version >}} requires a Kubernetes cluster v1.16 or newer, as well as a compatible version of `kubectl`. This guide assumes that you have already created a Kubernetes cluster and that you are using bash in a Mac or Linux environment; some commands will need to be adjusted for use in a Windows environment.
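Version floors are easy to misread by eye, so here is a small sketch of a helper that checks whether the server's minor version meets the v1.16 requirement; the `kubectl version --short` extraction in the comment is an assumption about your environment and cluster access:

```shell
# Returns success when a version string like "v1.18.2" is at least v1.<want>.
minor_at_least() {
  want=$1
  ver=$2
  minor=${ver#v1.}        # strip the "v1." prefix
  minor=${minor%%[!0-9]*} # keep only the leading digits of the minor version
  [ -n "$minor" ] && [ "$minor" -ge "$want" ]
}

# Against a live cluster (requires kubectl access; not run here):
#   server=$(kubectl version --short | awk '/Server Version/ {print $3}')
#   minor_at_least 16 "$server" || echo "Knative requires Kubernetes v1.16+" >&2
```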
## Installing the Serving component

{{< feature-state version="v0.9" state="stable" >}}
The following commands install the Knative Serving component.

Install the Custom Resource Definitions (aka CRDs):

```bash
kubectl apply --filename {{< artifact repo="serving" file="serving-crds.yaml" >}}
```

Install the core components of Serving (see below for optional extensions):

```bash
kubectl apply --filename {{< artifact repo="serving" file="serving-core.yaml" >}}
```
Pick a networking layer (alphabetical):
{{< tabs name="serving_networking" default="Istio" >}} {{% tab name="Ambassador" %}}

{{% feature-state version="v0.8" state="alpha" %}}

The following commands install Ambassador and enable its Knative integration.

Create a namespace to install Ambassador in:

```bash
kubectl create namespace ambassador
```

Install Ambassador:

```bash
kubectl apply --namespace ambassador \
  --filename https://getambassador.io/yaml/ambassador/ambassador-crds.yaml \
  --filename https://getambassador.io/yaml/ambassador/ambassador-rbac.yaml \
  --filename https://getambassador.io/yaml/ambassador/ambassador-service.yaml
```

Give Ambassador the required permissions:

```bash
kubectl patch clusterrolebinding ambassador -p '{"subjects":[{"kind": "ServiceAccount", "name": "ambassador", "namespace": "ambassador"}]}'
```

Enable Knative support in Ambassador:

```bash
kubectl set env --namespace ambassador deployments/ambassador AMBASSADOR_KNATIVE_SUPPORT=true
```

To configure Knative Serving to use Ambassador by default:

```bash
kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"ingress.class":"ambassador.ingress.networking.knative.dev"}}'
```

Fetch the External IP or CNAME:

```bash
kubectl --namespace ambassador get service ambassador
```

Save this for configuring DNS below.
{{< /tab >}}
{{% tab name="Contour" %}}

{{% feature-state version="v0.17" state="beta" %}}

The following commands install Contour and enable its Knative integration.

Install a properly configured Contour:

```bash
kubectl apply --filename {{< artifact repo="net-contour" file="contour.yaml" >}}
```

Install the Knative Contour controller:

```bash
kubectl apply --filename {{< artifact repo="net-contour" file="net-contour.yaml" >}}
```

To configure Knative Serving to use Contour by default:

```bash
kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"ingress.class":"contour.ingress.networking.knative.dev"}}'
```

Fetch the External IP or CNAME:

```bash
kubectl --namespace contour-external get service envoy
```

Save this for configuring DNS below.
{{< /tab >}}
{{% tab name="Gloo" %}}

{{% feature-state version="v0.8" state="alpha" %}}

For a detailed guide on Gloo integration, see Installing Gloo for Knative in the Gloo documentation.

The following commands install Gloo and enable its Knative integration.

Make sure `glooctl` is installed (version 1.3.x or higher recommended):

```bash
glooctl version
```

If it is not installed, you can install the latest version using:

```bash
curl -sL https://run.solo.io/gloo/install | sh
export PATH=$HOME/.gloo/bin:$PATH
```

Or follow the Gloo CLI install instructions.

Install Gloo and the Knative integration:

```bash
glooctl install knative --install-knative=false
```

Fetch the External IP or CNAME:

```bash
glooctl proxy url --name knative-external-proxy
```

Save this for configuring DNS below.
{{< /tab >}}
{{% tab name="Istio" %}}

{{% feature-state version="v0.9" state="stable" %}}

The following commands install Istio and enable its Knative integration.

Install the Knative Istio controller:

```bash
kubectl apply --filename {{< artifact repo="net-istio" file="release.yaml" >}}
```

Fetch the External IP or CNAME:

```bash
kubectl --namespace istio-system get service istio-ingressgateway
```

Save this for configuring DNS below.
{{< /tab >}}
{{% tab name="Kong" %}}

{{% feature-state version="v0.13" state="" %}}

The following commands install Kong and enable its Knative integration.

Install Kong Ingress Controller:

```bash
kubectl apply --filename https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/0.9.x/deploy/single/all-in-one-dbless.yaml
```

To configure Knative Serving to use Kong by default:

```bash
kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"ingress.class":"kong"}}'
```

Fetch the External IP or CNAME:

```bash
kubectl --namespace kong get service kong-proxy
```

Save this for configuring DNS below.
{{< /tab >}}
{{% tab name="Kourier" %}}

{{% feature-state version="v0.17" state="beta" %}}

The following commands install Kourier and enable its Knative integration.

Install the Knative Kourier controller:

```bash
kubectl apply --filename {{< artifact repo="net-kourier" file="kourier.yaml" >}}
```

To configure Knative Serving to use Kourier by default:

```bash
kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}'
```

Fetch the External IP or CNAME:

```bash
kubectl --namespace kourier-system get service kourier
```

Save this for configuring DNS below.
{{< /tab >}} {{< /tabs >}}
## Configure DNS
{{< tabs name="serving_dns" >}} {{% tab name="Magic DNS (xip.io)" %}} We ship a simple Kubernetes Job called "default domain" that will (see caveats) configure Knative Serving to use xip.io as the default DNS suffix.

```bash
kubectl apply --filename {{< artifact repo="serving" file="serving-default-domain.yaml" >}}
```
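Whether Magic DNS can work depends on what your LoadBalancer exposes. As a small sketch (the service name and namespace in the comment assume the Istio layer; substitute the service from your chosen networking layer), you can classify the address first:

```shell
# Succeeds when the argument looks like a dotted-quad IPv4 address.
is_ipv4() {
  addr=$1
  case $addr in
    *[!0-9.]* | "") return 1 ;;  # reject empty strings and non-digit/dot chars
  esac
  oldifs=$IFS; IFS=.
  set -- $addr
  IFS=$oldifs
  [ $# -eq 4 ]  # crude check: exactly four dot-separated fields
}

# Against a live cluster (requires kubectl access; not run here):
#   addr=$(kubectl --namespace istio-system get service istio-ingressgateway \
#     --output jsonpath='{.status.loadBalancer.ingress[0].ip}')
#   is_ipv4 "$addr" && echo "Magic DNS should work" || echo "use Real or Temporary DNS"
```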
Caveat: This will only work if the cluster LoadBalancer service exposes an IPv4 address, so it will not work with IPv6 clusters, load balancers that expose only a hostname, or local setups like Minikube. For these, see "Real DNS" or "Temporary DNS". {{< /tab >}}
{{% tab name="Real DNS" %}} To configure DNS for Knative, take the External IP or CNAME from setting up networking, and configure it with your DNS provider as follows:
If the networking layer produced an External IP address, then configure a wildcard A record for the domain:

```
# Here knative.example.com is the domain suffix for your cluster
*.knative.example.com == A 35.233.41.212
```

If the networking layer produced a CNAME, then configure a CNAME record for the domain:

```
# Here knative.example.com is the domain suffix for your cluster
*.knative.example.com == CNAME a317a278525d111e89f272a164fd35fb-1510370581.eu-central-1.elb.amazonaws.com
```
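The choice between record types follows mechanically from the shape of the address. Here is a tiny, hypothetical helper that prints the right wildcard record line (the `==` notation mirrors the examples above and is not any provider's syntax):

```shell
# Print a wildcard DNS record for DOMAIN pointing at ADDR,
# choosing A for dotted-quad IPv4 addresses and CNAME otherwise.
dns_record() {
  domain=$1
  addr=$2
  case $addr in
    *[!0-9.]*) printf '*.%s == CNAME %s\n' "$domain" "$addr" ;;
    *)         printf '*.%s == A %s\n' "$domain" "$addr" ;;
  esac
}

# dns_record knative.example.com 35.233.41.212
#   → *.knative.example.com == A 35.233.41.212
```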
Once your DNS provider has been configured, direct Knative to use that domain:

```bash
# Replace knative.example.com with your domain suffix
kubectl patch configmap/config-domain \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"knative.example.com":""}}'
```
{{< /tab >}}
{{% tab name="Temporary DNS" %}} If you are using `curl` to access the sample applications, or your own Knative app, and are unable to use the "Magic DNS (xip.io)" or "Real DNS" methods, there is a temporary approach. This is useful for those who wish to evaluate Knative without altering their DNS configuration, as per the "Real DNS" method, or who cannot use the "Magic DNS" method because they run, for example, minikube locally or IPv6 clusters.

To access your application using `curl` with this method:
After starting your application, get the URL of your application:

```bash
kubectl get ksvc
```

The output should be similar to:

```
NAME            URL                                        LATESTCREATED         LATESTREADY           READY   REASON
helloworld-go   http://helloworld-go.default.example.com   helloworld-go-vqjlf   helloworld-go-vqjlf   True
```
Instruct `curl` to connect to the External IP or CNAME defined by the networking layer in section 3 above, and use the `-H "Host:"` command-line option to specify the Knative application's host name. For example, if the networking layer defines your External IP and port to be `http://192.168.39.228:32198` and you wish to access the above `helloworld-go` application, use:

```bash
curl -H "Host: helloworld-go.default.example.com" http://192.168.39.228:32198
```

In the case of the provided `helloworld-go` sample application, the output should, using the default configuration, be:

```
Hello Go Sample v1!
```
Refer to the "Real DNS" method for a permanent solution.
{{< /tab >}}
{{< /tabs >}}
Monitor the Knative components until all of the components show a STATUS of `Running` or `Completed`:

```bash
kubectl get pods --namespace knative-serving
```
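If you would rather wait than re-run the command by hand, here is a small sketch of a readiness filter over the `kubectl get pods` output (the column positions are an assumption about the default output format):

```shell
# Reads `kubectl get pods` output on stdin; succeeds once every pod's
# STATUS column (the third) is Running or Completed.
all_settled() {
  awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { bad = 1 } END { exit bad }'
}

# Against a live cluster (requires kubectl access; not run here):
#   until kubectl get pods --namespace knative-serving | all_settled; do sleep 2; done
```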
At this point, you have a basic installation of Knative Serving!
{{< tabs name="serving_extensions" >}} {{% tab name="HPA autoscaling" %}}
{{% feature-state version="v0.8" state="beta" %}}
Knative also supports the use of the Kubernetes Horizontal Pod Autoscaler (HPA) for driving autoscaling decisions. The following command will install the components needed to support HPA-class autoscaling:

```bash
kubectl apply --filename {{< artifact repo="serving" file="serving-hpa.yaml" >}}
```
{{< /tab >}}
{{% tab name="TLS with cert-manager" %}}
{{% feature-state version="v0.6" state="alpha" %}}
Knative supports automatically provisioning TLS certificates via cert-manager. The following commands will install the components needed to support the provisioning of TLS certificates via cert-manager.

First, install cert-manager version 0.12.0 or higher.

Next, install the component that integrates Knative with cert-manager:

```bash
kubectl apply --filename {{< artifact repo="net-certmanager" file="release.yaml" >}}
```
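Configuring cert-manager means, in practice, creating an issuer and pointing Knative at it. As a minimal, hypothetical sketch only — a self-signed ClusterIssuer suitable for experiments; production setups normally use an ACME issuer with a DNS01 solver:

```yaml
# Hypothetical example: a self-signed ClusterIssuer for experimentation.
apiVersion: cert-manager.io/v1alpha2   # API group used by cert-manager 0.12
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer              # assumed name, not prescribed by Knative
spec:
  selfSigned: {}
```

Apply it with `kubectl apply --filename` as with the other manifests.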
Now configure Knative to automatically configure TLS certificates. {{< /tab >}}
{{% tab name="TLS via HTTP01" %}}
{{% feature-state version="v0.14" state="alpha" %}}
Knative supports automatically provisioning TLS certificates using Let's Encrypt HTTP01 challenges. The following commands will install the components needed to support that.

First, install the `net-http01` controller:

```bash
kubectl apply --filename {{< artifact repo="net-http01" file="release.yaml" >}}
```

Next, configure the `certificate.class` to use this certificate type:

```bash
kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"certificate.class":"net-http01.certificate.networking.knative.dev"}}'
```
Lastly, enable auto-TLS:

```bash
kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"autoTLS":"Enabled"}}'
```
{{< /tab >}}
{{% tab name="TLS wildcard support" %}}
{{% feature-state version="v0.12" state="alpha" %}}
If you are using a Certificate implementation that supports provisioning wildcard certificates (e.g. cert-manager with a DNS01 issuer), then the most efficient way to provision certificates is with the namespace wildcard certificate controller. The following command will install the components needed to provision wildcard certificates in each namespace:

```bash
kubectl apply --filename {{< artifact repo="serving" file="serving-nscert.yaml" >}}
```

Note: this will not work with HTTP01 challenges, whether via cert-manager or the net-http01 option.
{{< /tab >}} {{< /tabs >}}
Deploy your first app with the getting started with Knative app deployment guide. You can also find a number of samples for Knative Serving here.
## Installing the Eventing component

{{< feature-state version="v0.16" state="stable" >}}
The following commands install the Knative Eventing component.

Install the Custom Resource Definitions (aka CRDs):

```bash
kubectl apply --filename {{< artifact repo="eventing" file="eventing-crds.yaml" >}}
```

Install the core components of Eventing (see below for optional extensions):

```bash
kubectl apply --filename {{< artifact repo="eventing" file="eventing-core.yaml" >}}
```
Note: If your Kubernetes cluster comes with pre-installed Istio, make sure it has `cluster-local-gateway` deployed. Depending on which Istio version you have, you will need to apply the `istio-knative-extras.yaml` from the corresponding version folder [here](https://github.com/knative/serving/tree/{{< branch >}}/third_party).
Install a default Channel (messaging) layer (alphabetical).
{{< tabs name="eventing_channels" default="In-Memory (standalone)" >}} {{% tab name="Apache Kafka Channel" %}}
First, install Apache Kafka for Kubernetes. Then install the Apache Kafka Channel:

```bash
curl -L "{{< artifact repo="eventing-contrib" file="kafka-channel.yaml" >}}" \
  | sed 's/REPLACE_WITH_CLUSTER_URL/my-cluster-kafka-bootstrap.kafka:9092/' \
  | kubectl apply --filename -
```
To learn more about the Apache Kafka channel, try our sample
{{< /tab >}}
{{% tab name="Google Cloud Pub/Sub Channel" %}}
Install the Google Cloud Pub/Sub Channel:

```bash
# This installs both the Channel and the GCP Sources.
kubectl apply --filename {{< artifact org="google" repo="knative-gcp" file="cloud-run-events.yaml" >}}
```
To learn more about the Google Cloud Pub/Sub Channel, try our sample
{{< /tab >}}
{{% tab name="In-Memory (standalone)" %}}
{{< feature-state version="v0.16" state="stable" >}}
The following command installs an implementation of Channel that runs in-memory. This implementation is nice because it is simple and standalone, but it is unsuitable for production use cases.

```bash
kubectl apply --filename {{< artifact repo="eventing" file="in-memory-channel.yaml" >}}
```
{{< /tab >}}
{{% tab name="NATS Channel" %}}
First, [Install NATS Streaming for Kubernetes](https://github.com/knative/eventing-contrib/blob/{{< version >}}/natss/config/broker/README.md)
Then install the NATS Streaming Channel:

```bash
kubectl apply --filename {{< artifact repo="eventing-contrib" file="natss-channel.yaml" >}}
```
{{< /tab >}}
{{< /tabs >}}
Install a Broker (eventing) layer:
{{< tabs name="eventing_brokers" default="MT-Channel-based" >}}
{{% tab name="Apache Kafka Broker" %}}
{{< feature-state version="v0.17" state="alpha" >}}
The following command installs the Apache Kafka broker, and runs event routing in a system namespace, `knative-eventing`, by default.

```bash
kubectl apply --filename {{< artifact org="knative-sandbox" repo="eventing-kafka-broker" file="eventing-kafka-broker.yaml" >}}
```
For more information, see the Kafka Broker documentation. {{< /tab >}}
{{% tab name="MT-Channel-based" %}} {{< feature-state version="v0.16" state="stable" >}}
The following command installs an implementation of Broker that utilizes Channels and runs event routing components in a System Namespace, providing a smaller and simpler installation.
```bash
kubectl apply --filename {{< artifact repo="eventing" file="mt-channel-broker.yaml" >}}
```
To customize which broker channel implementation is used, update the following ConfigMap to specify which configurations are used for which namespaces:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-br-defaults
  namespace: knative-eventing
data:
  default-br-config: |
    # This is the cluster-wide default broker channel.
    clusterDefault:
      brokerClass: MTChannelBasedBroker
      apiVersion: v1
      kind: ConfigMap
      name: imc-channel
      namespace: knative-eventing
    # This allows you to specify different defaults per-namespace,
    # in this case the "some-namespace" namespace will use the Kafka
    # channel ConfigMap by default (only for example, you will need
    # to install kafka also to make use of this).
    namespaceDefaults:
      some-namespace:
        brokerClass: MTChannelBasedBroker
        apiVersion: v1
        kind: ConfigMap
        name: kafka-channel
        namespace: knative-eventing
```
The referenced `imc-channel` and `kafka-channel` example ConfigMaps would look like:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: imc-channel
  namespace: knative-eventing
data:
  channelTemplateSpec: |
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-channel
  namespace: knative-eventing
data:
  channelTemplateSpec: |
    apiVersion: messaging.knative.dev/v1alpha1
    kind: KafkaChannel
    spec:
      numPartitions: 3
      replicationFactor: 1
```
In order to use the KafkaChannel make sure it is installed on the cluster as discussed above.
{{< /tab >}}
{{< /tabs >}}
Monitor the Knative components until all of the components show a STATUS of `Running`:

```bash
kubectl get pods --namespace knative-eventing
```
At this point, you have a basic installation of Knative Eventing!
{{< tabs name="eventing_extensions" >}} {{% tab name="Sugar Controller" %}}
{{< feature-state version="v0.16" state="alpha" >}}
The following command installs the Eventing Sugar Controller:

```bash
kubectl apply --filename {{< artifact repo="eventing" file="eventing-sugar-controller.yaml" >}}
```
The Knative Eventing Sugar Controller will react to special labels and annotations and produce Eventing resources. For example:

- When a Namespace is labeled with `eventing.knative.dev/injection=enabled`, the controller will create a default Broker in that namespace.
- When a Trigger is annotated with `eventing.knative.dev/injection=enabled`, the controller will create a Broker named by that Trigger in the Trigger's Namespace.

The following command enables the default Broker on a namespace (here `default`):

```bash
kubectl label namespace default eventing.knative.dev/injection=enabled
```
{{< /tab >}}
{{% tab name="GitHub Source" %}}

{{< feature-state version="v0.2" state="alpha" >}}

The following command installs the single-tenant GitHub source:

```bash
kubectl apply --filename {{< artifact repo="eventing-contrib" file="github.yaml" >}}
```

The single-tenant GitHub source creates one Knative service per GitHub source.

The following command installs the multi-tenant GitHub source:

```bash
kubectl apply --filename {{< artifact repo="eventing-contrib" file="mt-github.yaml" >}}
```

The multi-tenant GitHub source creates only one Knative service that handles all GitHub sources in the cluster. This source does not support logging or tracing configuration yet.

To learn more about the GitHub source, try our sample
{{< /tab >}}
{{% tab name="Apache Camel-K Source" %}} {{< feature-state version="v0.5" state="alpha" >}}
The following command installs the Apache Camel-K Source:
```bash
kubectl apply --filename {{< artifact repo="eventing-contrib" file="camel.yaml" >}}
```
To learn more about the Apache Camel-K source, try our sample
{{< /tab >}}
{{% tab name="Apache Kafka Source" %}}
{{< feature-state version="v0.5" state="alpha" >}}
The following command installs the Apache Kafka Source:
```bash
kubectl apply --filename {{< artifact repo="eventing-contrib" file="kafka-source.yaml" >}}
```
To learn more about the Apache Kafka source, try our sample
{{< /tab >}}
{{% tab name="GCP Sources" %}}
{{< feature-state version="v0.2" state="alpha" >}}
The following command installs the GCP Sources:

```bash
# This installs both the Sources and the Channel.
kubectl apply --filename {{< artifact org="google" repo="knative-gcp" file="cloud-run-events.yaml" >}}
```
To learn more about the Cloud Pub/Sub source, try our sample.
To learn more about the Cloud Storage source, try our sample.
To learn more about the Cloud Scheduler source, try our sample.
To learn more about the Cloud Audit Logs source, try our sample.
{{< /tab >}}
{{% tab name="Apache CouchDB Source" %}}
{{< feature-state version="v0.10" state="alpha" >}}
The following command installs the Apache CouchDB Source:

```bash
kubectl apply --filename {{< artifact repo="eventing-contrib" file="couchdb.yaml" >}}
```

To learn more about the Apache CouchDB source, read [our documentation](https://github.com/knative/eventing-contrib/blob/{{< version >}}/couchdb/README.md).
{{< /tab >}}
{{% tab name="VMware Sources and Bindings" %}}
{{< feature-state version="v0.14" state="alpha" >}}
The following command installs the VMware Sources and Bindings:

```bash
kubectl apply --filename {{< artifact org="vmware-tanzu" repo="sources-for-knative" file="release.yaml" >}}
```
To learn more about the VMware sources and bindings, try our samples.
{{< /tab >}}
{{< /tabs >}}
You can find a number of samples for Knative Eventing here. A quick-start guide is available here.
## Installing the Observability plugin

{{< feature-state version="v0.14" state="deprecated" >}}
Install the following observability features to enable logging, metrics, and request tracing in your Serving and Eventing components.

All observability plugins require that you first install the core:

```bash
kubectl apply --filename {{< artifact repo="serving" file="monitoring-core.yaml" >}}
```
After the core is installed, you can choose to install one or all of the following observability plugins:
Install Prometheus and Grafana for metrics:

```bash
kubectl apply --filename {{< artifact repo="serving" file="monitoring-metrics-prometheus.yaml" >}}
```

Install the ELK stack (Elasticsearch, Logstash and Kibana) for logs:

```bash
kubectl apply --filename {{< artifact repo="serving" file="monitoring-logs-elasticsearch.yaml" >}}
```
Install Jaeger for distributed tracing:

{{< tabs name="jaeger" default="In-Memory (standalone)" >}} {{% tab name="In-Memory (standalone)" %}} To install the in-memory (standalone) version of Jaeger, run the following command:

```bash
kubectl apply --filename {{< artifact repo="serving" file="monitoring-tracing-jaeger-in-mem.yaml" >}}
```
{{< /tab >}}
{{% tab name="ELK stack" %}} To install the ELK version of Jaeger (needs the ELK install above), run the following command:

```bash
kubectl apply --filename {{< artifact repo="serving" file="monitoring-tracing-jaeger.yaml" >}}
```
{{< /tab >}} {{< /tabs >}}
Install Zipkin for distributed tracing:

{{< tabs name="zipkin" default="In-Memory (standalone)" >}} {{% tab name="In-Memory (standalone)" %}} To install the in-memory (standalone) version of Zipkin, run the following command:

```bash
kubectl apply --filename {{< artifact repo="serving" file="monitoring-tracing-zipkin-in-mem.yaml" >}}
```
{{< /tab >}}
{{% tab name="ELK stack" %}} To install the ELK version of Zipkin (needs the ELK install above), run the following command:

```bash
kubectl apply --filename {{< artifact repo="serving" file="monitoring-tracing-zipkin.yaml" >}}
```
{{< /tab >}} {{< /tabs >}}