knative


Knative Skill

Overview

Knative is an open-source, Kubernetes-based platform for deploying and managing serverless workloads. It provides three main components:

| Component | Purpose | Key Features |
| --- | --- | --- |
| Serving | HTTP-triggered autoscaling runtime | Scale-to-zero, traffic splitting, revisions |
| Eventing | Event-driven architectures | Brokers, Triggers, Sources, CloudEvents |
| Functions | Simplified function deployment | `func` CLI, multi-language support |

Current Version: v1.20.0 (as of late 2024)

When to Use This Skill

Use this skill when the user:
  • Wants to deploy serverless workloads on Kubernetes
  • Needs scale-to-zero autoscaling capabilities
  • Is implementing event-driven architectures
  • Needs traffic management (blue-green, canary, gradual rollout)
  • Works with CloudEvents or event routing
  • Mentions Knative Serving, Eventing, or Functions
  • Asks about Brokers, Triggers, Sources, or Sinks
  • Needs to configure networking layers (Kourier, Istio, Contour)

Quick Reference

Knative Service (Serving)

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
  namespace: default
spec:
  template:
    metadata:
      annotations:
        # Autoscaling configuration
        autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
        autoscaling.knative.dev/min-scale: "0"
        autoscaling.knative.dev/max-scale: "10"
        autoscaling.knative.dev/target: "100"  # concurrent requests
    spec:
      containers:
        - image: gcr.io/my-project/my-app:latest
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              memory: 128Mi  # Memory limit = request (required)
              # No CPU limit (a CPU limit causes throttling)
```
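Once the manifest is applied, Knative reports the service URL and readiness. A quick way to deploy and verify it, assuming the manifest above is saved as `service.yaml` (a sketch; requires a cluster with Knative Serving installed):

```shell
# Apply the Service and wait for the first revision to become Ready
kubectl apply -f service.yaml
kubectl wait ksvc/my-service --for=condition=Ready --timeout=120s
kubectl get ksvc my-service   # the URL column shows the external address
```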

Traffic Splitting

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    # ... container spec
  traffic:
    # Canary: split traffic between two revisions (percents must sum to 100)
    - revisionName: my-service-00001
      percent: 90
    - revisionName: my-service-00002
      percent: 10
    # Or route everything to the latest revision (blue-green style):
    # - latestRevision: true
    #   percent: 100
```
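If the `kn` CLI is installed, the same split can be applied imperatively without editing YAML (a sketch; the revision names are examples from the manifest above):

```shell
# Shift 10% of traffic to the newer revision, keep 90% on the old one
kn service update my-service \
  --traffic my-service-00001=90 \
  --traffic my-service-00002=10
```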

Tagged Revisions (Preview URLs)

```yaml
traffic:
  - revisionName: my-service-00002
    percent: 0
    tag: staging  # Accessible at staging-my-service.example.com
  - latestRevision: true
    percent: 100
    tag: production
```

Broker and Trigger (Eventing)


Create a Broker

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: default
```

Create a Trigger to route events

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-trigger
  namespace: default
spec:
  broker: default
  filter:
    attributes:
      type: dev.knative.example
      source: /my/source
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service
```
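Events can be sent to a Broker over plain HTTP from inside the cluster. A minimal smoke test, assuming the in-cluster `broker-ingress` address used by the MT-channel-based broker (URL path is `/<namespace>/<broker-name>`):

```shell
# Post a CloudEvent matching the Trigger filter above
kubectl run curl-test --image=curlimages/curl --rm -i --restart=Never --command -- \
  curl -s -X POST "http://broker-ingress.knative-eventing.svc.cluster.local/default/default" \
    -H "Ce-Id: test-1" \
    -H "Ce-Specversion: 1.0" \
    -H "Ce-Type: dev.knative.example" \
    -H "Ce-Source: /my/source" \
    -H "Content-Type: application/json" \
    -d '{"msg": "hello"}'
```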

Event Source Example (PingSource)

```yaml
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: cron-job
spec:
  schedule: "*/1 * * * *"  # Every minute
  contentType: application/json
  data: '{"message": "Hello from cron"}'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
```

Installation

Prerequisites

  • Kubernetes cluster v1.28+
  • `kubectl` configured
  • Cluster admin permissions

Method 1: YAML Install (Recommended for GitOps)


Install Knative Serving CRDs and Core

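The release manifests follow a predictable URL scheme on GitHub releases. A sketch for v1.20.0 (adjust the version tag to match your target release):

```shell
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.20.0/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.20.0/serving-core.yaml
```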
Install Networking Layer (choose one)

Option A: Kourier (lightweight, recommended for most cases)

```bash
kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.20.0/kourier.yaml

kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'
```

Option B: Istio (for service mesh requirements)

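A sketch of the Istio networking-layer install, assuming Istio itself is already installed on the cluster and the same release version as above:

```shell
kubectl apply -f https://github.com/knative/net-istio/releases/download/knative-v1.20.0/net-istio.yaml
```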
Install Knative Eventing

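Eventing follows the same release-manifest pattern (a sketch for v1.20.0; adjust the version to your release):

```shell
kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.20.0/eventing-crds.yaml
kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.20.0/eventing-core.yaml
```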
Install In-Memory Channel (dev only) or Kafka Channel (production)

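For development, the in-memory channel manifest ships with the Eventing release (a sketch; production setups should install a Kafka channel instead):

```shell
# In-memory channel: dev only, no persistence
kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.20.0/in-memory-channel.yaml
```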
Install MT-Channel-Based Broker

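The multi-tenant channel-based Broker is also part of the Eventing release (a sketch for v1.20.0):

```shell
kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.20.0/mt-channel-broker.yaml
```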
Method 2: Knative Operator


Install the Operator

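The Operator is published under the same GitHub-release URL scheme (a sketch; pin the version to the Knative release you plan to manage):

```shell
kubectl apply -f https://github.com/knative/operator/releases/download/knative-v1.20.0/operator.yaml
```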
Deploy Knative Serving

```yaml
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  ingress:
    kourier:
      enabled: true
  config:
    network:
      ingress-class: kourier.ingress.networking.knative.dev
```

Deploy Knative Eventing

```yaml
apiVersion: operator.knative.dev/v1beta1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
```

Configure DNS


Get the External IP of the ingress

```bash
kubectl get svc kourier -n kourier-system
```

Option A: Real DNS (production)

Create A record: *.knative.example.com -> EXTERNAL-IP

Option B: Magic DNS with sslip.io (development)

```bash
kubectl patch configmap/config-domain \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"<EXTERNAL-IP>.sslip.io":""}}'
```

Option C: Default domain (internal only)

```bash
kubectl patch configmap/config-domain \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"example.com":""}}'
```

Autoscaling

Autoscaler Classes

| Class | Key | Scale to Zero | Metrics |
| --- | --- | --- | --- |
| KPA (default) | `kpa.autoscaling.knative.dev` | Yes | Concurrency, RPS |
| HPA | `hpa.autoscaling.knative.dev` | No | CPU, Memory |

Autoscaling Annotations

```yaml
metadata:
  annotations:
    # Autoscaler class
    autoscaling.knative.dev/class: kpa.autoscaling.knative.dev

    # Scale bounds
    autoscaling.knative.dev/min-scale: "1"    # Prevent scale-to-zero
    autoscaling.knative.dev/max-scale: "100"
    autoscaling.knative.dev/initial-scale: "3"

    # Scaling metric (KPA)
    autoscaling.knative.dev/metric: concurrency  # or rps
    autoscaling.knative.dev/target: "100"        # target per pod

    # Scale-down behavior
    autoscaling.knative.dev/scale-down-delay: "5m"
    autoscaling.knative.dev/scale-to-zero-pod-retention-period: "1m"

    # Window for averaging metrics
    autoscaling.knative.dev/window: "60s"
```
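For intuition on how `target` drives replica counts: the KPA roughly sizes the deployment to observed concurrency divided by the per-pod target. A simplified sketch (the real autoscaler also applies panic windows and target-utilization headroom; the load figure here is hypothetical):

```shell
# Simplified KPA sizing: desired pods = ceil(observed concurrency / target)
observed=450   # average concurrent requests over the metric window (hypothetical)
target=100     # autoscaling.knative.dev/target
desired=$(( (observed + target - 1) / target ))   # integer ceiling division
echo "$desired"   # prints 5
```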

Concurrency Limits

```yaml
spec:
  template:
    spec:
      containerConcurrency: 100  # Hard limit per pod (0 = unlimited)
```

Networking

Networking Layer Comparison

| Layer | Pros | Cons | Use Case |
| --- | --- | --- | --- |
| Kourier | Lightweight, fast, simple | Limited features | Most deployments |
| Istio | Full service mesh, mTLS | Heavy, complex | Enterprise, security-critical |
| Contour | Envoy-based, good performance | Medium complexity | High-traffic apps |

TLS Configuration


Using cert-manager (recommended)

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Auto-TLS with cert-manager
    networking.knative.dev/certificate-class: cert-manager
spec:
  template:
    # ... template spec
```

Custom Domain Mapping

```yaml
apiVersion: serving.knative.dev/v1beta1
kind: DomainMapping
metadata:
  name: api.example.com
  namespace: default
spec:
  ref:
    kind: Service
    name: my-service
    apiVersion: serving.knative.dev/v1
```

Eventing Patterns

Event Source Types

| Source | Description | Use Case |
| --- | --- | --- |
| PingSource | Cron-based events | Scheduled tasks |
| ApiServerSource | K8s API events | Cluster monitoring |
| KafkaSource | Kafka messages | Stream processing |
| GitHubSource | GitHub webhooks | CI/CD triggers |
| ContainerSource | Custom container | Any external system |

Channel Types

| Channel | Persistence | Use Case |
| --- | --- | --- |
| InMemoryChannel | No | Development only |
| KafkaChannel | Yes | Production |
| NATSChannel | Configurable | High throughput |

Sequence (Chained Processing)

```yaml
apiVersion: flows.knative.dev/v1
kind: Sequence
metadata:
  name: my-sequence
spec:
  channelTemplate:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
  steps:
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: step-1
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: step-2
  reply:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: final-handler
```

Parallel (Fan-out)

```yaml
apiVersion: flows.knative.dev/v1
kind: Parallel
metadata:
  name: my-parallel
spec:
  channelTemplate:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
  branches:
    - subscriber:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: handler-a
    - subscriber:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: handler-b
```

Knative Functions

CLI Installation


macOS

```bash
brew install knative/client/func
```

Linux

```bash
curl -sL https://github.com/knative/func/releases/latest/download/func_linux_amd64 -o func
chmod +x func && sudo mv func /usr/local/bin/
```

Function Lifecycle


Create a new function

```bash
func create -l python my-function
cd my-function
```

Build the function

```bash
func build
```

Deploy to cluster

```bash
func deploy
```

Invoke locally

```bash
func invoke
```

Invoke remote

```bash
func invoke --target=remote
```

Supported Languages

| Language | Template | Runtime |
| --- | --- | --- |
| Go | `go` | Native binary |
| Node.js | `node` | Node.js 18+ |
| Python | `python` | Python 3.9+ |
| Quarkus | `quarkus` | GraalVM/JVM |
| Rust | `rust` | Native binary |
| TypeScript | `typescript` | Node.js |

Function Configuration (func.yaml)

```yaml
specVersion: 0.36.0
name: my-function
runtime: python
registry: docker.io/myuser
image: docker.io/myuser/my-function:latest
build:
  builder: pack
  buildpacks:
    - paketo-buildpacks/python
deploy:
  namespace: default
  annotations:
    autoscaling.knative.dev/min-scale: "1"
  env:
    - name: MY_VAR
      value: my-value
  volumes:
    - secret: my-secret
      path: /secrets
```

Best Practices

Resource Configuration

| Resource | Requirement | Notes |
| --- | --- | --- |
| CPU requests | Required | Set based on actual usage |
| CPU limits | Forbidden | Causes throttling |
| Memory requests | Required | Match your app needs |
| Memory limits | Required | Must equal requests |

```yaml
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    memory: 128Mi  # Same as request
    # No CPU limit!
```

Probes Configuration

```yaml
spec:
  containers:
    - image: my-app
      readinessProbe:
        httpGet:
          path: /health
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
      # Liveness probe optional for serverless
      # (Knative handles pod lifecycle)
```

Cold Start Optimization

  1. Keep minimum replicas: Set `min-scale: "1"` for latency-critical services
  2. Optimize container image: Use distroless/alpine base images
  3. Lazy initialization: Defer heavy initialization until the first request
  4. Connection pooling: Pre-warm database connections

Production Checklist

  • Use KafkaChannel instead of InMemoryChannel
  • Configure proper resource requests/limits
  • Set up TLS with cert-manager
  • Configure custom domain
  • Set appropriate min/max scale values
  • Enable dead letter sink for Triggers
  • Configure monitoring (Prometheus metrics)
  • Set up proper RBAC

Troubleshooting

Common Issues

| Symptom | Cause | Solution |
| --- | --- | --- |
| Service not accessible | DNS not configured | Configure domain mapping or use sslip.io |
| Pods not scaling up | Activator overloaded | Increase activator replicas |
| Slow cold starts | Large image or slow init | Optimize image, use `min-scale: "1"` |
| Events not delivered | Broker misconfigured | Check Broker/Trigger status |
| 503 errors | Service scaling | Check activator logs, increase scale |
| Certificate errors | cert-manager issue | Check ClusterIssuer and Certificate status |

Diagnostic Commands


Check Knative Serving status

```bash
kubectl get ksvc -A
kubectl describe ksvc <service-name>
```

Check revisions

```bash
kubectl get revisions -A
kubectl describe revision <revision-name>
```

Check routes

```bash
kubectl get routes -A
```

Check Knative Eventing status

```bash
kubectl get brokers -A
kubectl get triggers -A
kubectl get sources -A
```

Check event delivery

```bash
kubectl get subscriptions -A
```

View activator logs

```bash
kubectl logs -n knative-serving -l app=activator -c activator
```

View controller logs

```bash
kubectl logs -n knative-serving -l app=controller
```

Check networking layer (Kourier)

```bash
kubectl logs -n kourier-system -l app=3scale-kourier-gateway
```

Debug Event Flow


Deploy event-display service for debugging

```bash
kubectl apply -f - <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-display
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-releases/knative.dev/eventing/cmd/event_display
EOF
```

Create a trigger to route all events

```bash
kubectl apply -f - <<EOF
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: debug-trigger
spec:
  broker: default
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
EOF
```

Watch events

```bash
kubectl logs -l serving.knative.dev/service=event-display -c user-container -f
```

Integration with ArgoCD

ApplicationSet for Knative Services

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: knative-services
spec:
  generators:
    - git:
        repoURL: https://github.com/org/repo.git
        revision: HEAD
        directories:
          - path: knative-services/*
  template:
    metadata:
      name: '{{path.basename}}'
      annotations:
        # Disable SSA if using Jobs
        argocd.argoproj.io/compare-options: ServerSideDiff=false
    spec:
      project: default
      source:
        repoURL: https://github.com/org/repo.git
        targetRevision: HEAD
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: default
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```

Health Checks for Knative Resources

ArgoCD automatically recognizes Knative resources. Custom health checks:

In argocd-cm ConfigMap

```yaml
data:
  resource.customizations.health.serving.knative.dev_Service: |
    hs = {}
    if obj.status ~= nil then
      if obj.status.conditions ~= nil then
        for _, condition in ipairs(obj.status.conditions) do
          if condition.type == "Ready" and condition.status == "True" then
            hs.status = "Healthy"
            hs.message = "Service is ready"
            return hs
          end
        end
      end
    end
    hs.status = "Progressing"
    hs.message = "Waiting for service to be ready"
    return hs
```

References
