# Knative Skill
## Overview
Knative is an open-source Kubernetes-based platform for deploying and managing serverless workloads. It provides three main components:
| Component | Purpose | Key Features |
|---|---|---|
| Serving | HTTP-triggered autoscaling runtime | Scale-to-zero, traffic splitting, revisions |
| Eventing | Event-driven architectures | Brokers, Triggers, Sources, CloudEvents |
| Functions | Simplified function deployment | `func` CLI, buildpacks, multiple language templates |
Current Version: v1.20.0 (as of late 2024)
## When to Use This Skill
Use this skill when the user:
- Wants to deploy serverless workloads on Kubernetes
- Needs scale-to-zero autoscaling capabilities
- Is implementing event-driven architectures
- Needs traffic management (blue-green, canary, gradual rollout)
- Works with CloudEvents or event routing
- Mentions Knative Serving, Eventing, or Functions
- Asks about Brokers, Triggers, Sources, or Sinks
- Needs to configure networking layers (Kourier, Istio, Contour)
## Quick Reference
### Knative Service (Serving)
```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
  namespace: default
spec:
  template:
    metadata:
      annotations:
        # Autoscaling configuration
        autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
        autoscaling.knative.dev/min-scale: "0"
        autoscaling.knative.dev/max-scale: "10"
        autoscaling.knative.dev/target: "100" # concurrent requests
    spec:
      containers:
        - image: gcr.io/my-project/my-app:latest
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              memory: 128Mi # Memory limit = request (required)
              # No CPU limit (CPU limits cause throttling)
```
### Traffic Splitting
```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
spec:
  template:
    # ... container spec
  traffic:
    # Canary: gradual split between pinned revisions
    # (blue-green would be 100/0 between old and new)
    - revisionName: my-service-00001
      percent: 90
    - revisionName: my-service-00002
      percent: 10
    # Or route everything to the latest revision instead
    # (percentages must total 100):
    # - latestRevision: true
    #   percent: 100
```
### Tagged Revisions (Preview URLs)
```yaml
traffic:
  - revisionName: my-service-00002
    percent: 0
    tag: staging # Accessible at staging-my-service.example.com
  - latestRevision: true
    percent: 100
    tag: production
```
### Broker and Trigger (Eventing)
Create a Broker:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  namespace: default
```
Create a Trigger to route events:
```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-trigger
  namespace: default
spec:
  broker: default
  filter:
    attributes:
      type: dev.knative.example
      source: /my/source
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service
```
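To test a Trigger like this one, you can post a CloudEvent directly to the broker ingress. A minimal Python sketch, assuming the MT channel-based broker's ingress URL pattern `http://broker-ingress.knative-eventing.svc.cluster.local/<namespace>/<broker>` (only reachable from inside the cluster):

```python
import json
from urllib import request

# MT channel-based broker ingress URL pattern (in-cluster only):
#   http://broker-ingress.knative-eventing.svc.cluster.local/<namespace>/<broker>
BROKER_URL = "http://broker-ingress.knative-eventing.svc.cluster.local/default/default"


def cloudevent_headers(ce_type: str, ce_source: str, ce_id: str) -> dict:
    """CloudEvents 1.0 binary-mode HTTP headers (Ce-* prefix)."""
    return {
        "Ce-Id": ce_id,
        "Ce-Specversion": "1.0",
        "Ce-Type": ce_type,
        "Ce-Source": ce_source,
        "Content-Type": "application/json",
    }


def send_event(data: dict, ce_type: str = "dev.knative.example",
               ce_source: str = "/my/source", ce_id: str = "1"):
    """POST the event; the broker answers 202 Accepted when it takes delivery."""
    req = request.Request(
        BROKER_URL,
        data=json.dumps(data).encode("utf-8"),
        headers=cloudevent_headers(ce_type, ce_source, ce_id),
        method="POST",
    )
    return request.urlopen(req)
```

An event sent this way matches the Trigger above because its `type` and `source` attributes equal the filter values.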
### Event Source Example (PingSource)
```yaml
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: cron-job
spec:
  schedule: "*/1 * * * *" # Every minute
  contentType: application/json
  data: '{"message": "Hello from cron"}'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
```
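Each firing of this PingSource delivers a CloudEvent to the sink; by convention the event type is `dev.knative.sources.ping` and the source path encodes the namespace and name. Roughly (illustrative; the per-event `id` and timestamp attributes are generated at send time and omitted here):

```json
{
  "specversion": "1.0",
  "type": "dev.knative.sources.ping",
  "source": "/apis/v1/namespaces/default/pingsources/cron-job",
  "datacontenttype": "application/json",
  "data": {"message": "Hello from cron"}
}
```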
## Installation
### Prerequisites

- Kubernetes cluster v1.28+
- `kubectl` configured
- Cluster admin permissions
### Method 1: YAML Install (Recommended for GitOps)
```bash
# Install Knative Serving CRDs and core
# (release URLs follow the same knative-v1.20.0 pattern used elsewhere in this guide)
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.20.0/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.20.0/serving-core.yaml

# Install a networking layer (choose one)
# Option A: Kourier (lightweight, recommended for most cases)
kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.20.0/kourier.yaml
kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'
```
```bash
# Option B: Istio (for service mesh requirements)
# See https://knative.dev/docs/install/ for the Istio-based install

# Install Knative Eventing
kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.20.0/eventing-crds.yaml
kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.20.0/eventing-core.yaml

# Install In-Memory Channel (dev only) or Kafka Channel (production)
kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.20.0/in-memory-channel.yaml

# Install MT-Channel-Based Broker
kubectl apply -f https://github.com/knative/eventing/releases/download/knative-v1.20.0/mt-channel-broker.yaml
```
### Method 2: Knative Operator
Install the Operator:

```bash
kubectl apply -f https://github.com/knative/operator/releases/download/knative-v1.20.0/operator.yaml
```

Deploy Knative Serving:
```yaml
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  ingress:
    kourier:
      enabled: true
  config:
    network:
      ingress-class: kourier.ingress.networking.knative.dev
```
Deploy Knative Eventing:
```yaml
apiVersion: operator.knative.dev/v1beta1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
```
### Configure DNS
```bash
# Get the external IP of the ingress
kubectl get svc kourier -n kourier-system

# Option A: Real DNS (production)
# Create an A record: *.knative.example.com -> EXTERNAL-IP

# Option B: Magic DNS with sslip.io (development)
kubectl patch configmap/config-domain \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"<EXTERNAL-IP>.sslip.io":""}}'

# Option C: Default domain (internal only)
kubectl patch configmap/config-domain \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"example.com":""}}'
```
## Autoscaling
### Autoscaler Classes
| Class | Key | Scale to Zero | Metrics |
|---|---|---|---|
| KPA (default) | `kpa.autoscaling.knative.dev` | Yes | Concurrency, RPS |
| HPA | `hpa.autoscaling.knative.dev` | No | CPU, Memory |
### Autoscaling Annotations
```yaml
metadata:
  annotations:
    # Autoscaler class
    autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
    # Scale bounds
    autoscaling.knative.dev/min-scale: "1" # Prevent scale-to-zero
    autoscaling.knative.dev/max-scale: "100"
    autoscaling.knative.dev/initial-scale: "3"
    # Scaling metric (KPA)
    autoscaling.knative.dev/metric: concurrency # or rps
    autoscaling.knative.dev/target: "100" # target per pod
    # Scale-down behavior
    autoscaling.knative.dev/scale-down-delay: "5m"
    autoscaling.knative.dev/scale-to-zero-pod-retention-period: "1m"
    # Window for averaging metrics
    autoscaling.knative.dev/window: "60s"
```
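The relationship between `target`, observed load, and replica count can be sketched as a simplified calculation. This illustrates the scaling rule only; the real KPA additionally handles panic mode, the averaging window, and activation from zero:

```python
import math


def desired_pods(observed_concurrency: float, target_per_pod: float,
                 min_scale: int, max_scale: int) -> int:
    """Pods needed so average per-pod concurrency stays at the target,
    clamped to the configured scale bounds (simplified KPA-style rule)."""
    if target_per_pod <= 0:
        raise ValueError("target must be positive")
    want = math.ceil(observed_concurrency / target_per_pod)
    return max(min_scale, min(max_scale, want))


# 250 in-flight requests at a target of 100 per pod -> 3 pods
```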
### Concurrency Limits
```yaml
spec:
  template:
    spec:
      containerConcurrency: 100 # Hard limit per pod (0 = unlimited)
```
## Networking
### Networking Layer Comparison
| Layer | Pros | Cons | Use Case |
|---|---|---|---|
| Kourier | Lightweight, fast, simple | Limited features | Most deployments |
| Istio | Full service mesh, mTLS | Heavy, complex | Enterprise, security-critical |
| Contour | Envoy-based, good performance | Medium complexity | High-traffic apps |
### TLS Configuration
Using cert-manager (recommended):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Auto-TLS with cert-manager
    networking.knative.dev/certificate-class: cert-manager
spec:
  # ... template spec
```
### Custom Domain Mapping
```yaml
apiVersion: serving.knative.dev/v1beta1
kind: DomainMapping
metadata:
  name: api.example.com
  namespace: default
spec:
  ref:
    apiVersion: serving.knative.dev/v1
    kind: Service
    name: my-service
```
## Eventing Patterns
### Event Source Types
| Source | Description | Use Case |
|---|---|---|
| PingSource | Cron-based events | Scheduled tasks |
| ApiServerSource | K8s API events | Cluster monitoring |
| KafkaSource | Kafka messages | Stream processing |
| GitHubSource | GitHub webhooks | CI/CD triggers |
| ContainerSource | Custom container | Any external system |
### Channel Types
| Channel | Persistence | Use Case |
|---|---|---|
| InMemoryChannel | No | Development only |
| KafkaChannel | Yes | Production |
| NATSChannel | Configurable | High throughput |
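To back a Broker with Kafka channels in production, point its `spec.config` at a ConfigMap carrying a `KafkaChannel` template. A sketch, assuming the KafkaChannel CRD from the Knative Kafka components is installed (the ConfigMap name and partition counts here are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kafka-channel-config
  namespace: knative-eventing
data:
  channel-template-spec: |
    apiVersion: messaging.knative.dev/v1beta1
    kind: KafkaChannel
    spec:
      numPartitions: 3
      replicationFactor: 1
---
apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: default
  annotations:
    eventing.knative.dev/broker.class: MTChannelBasedBroker
spec:
  config:
    apiVersion: v1
    kind: ConfigMap
    name: kafka-channel-config
    namespace: knative-eventing
```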
### Sequence (Chained Processing)
```yaml
apiVersion: flows.knative.dev/v1
kind: Sequence
metadata:
  name: my-sequence
spec:
  channelTemplate:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
  steps:
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: step-1
    - ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: step-2
  reply:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: final-handler
```
### Parallel (Fan-out)
```yaml
apiVersion: flows.knative.dev/v1
kind: Parallel
metadata:
  name: my-parallel
spec:
  channelTemplate:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
  branches:
    - subscriber:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: handler-a
    - subscriber:
        ref:
          apiVersion: serving.knative.dev/v1
          kind: Service
          name: handler-b
```
## Knative Functions
### CLI Installation
```bash
# macOS
brew install knative/client/func

# Linux
curl -sL https://github.com/knative/func/releases/latest/download/func_linux_amd64 -o func
chmod +x func && sudo mv func /usr/local/bin/
```
### Function Lifecycle
```bash
# Create a new function
func create -l python my-function
cd my-function

# Build the function
func build

# Deploy to cluster
func deploy

# Invoke locally
func invoke

# Invoke remote
func invoke --target=remote
```
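A function scaffolded with `func create -l python` is just a module exposing an entry point. The exact handler shape depends on the template version; this sketch assumes the HTTP template's `main(context)` convention, where `context.request` wraps the incoming request (verify against the scaffold `func create` actually generates for you):

```python
# Hypothetical handler body for a `func create -l python` HTTP function.
def main(context):
    """Respond with a JSON body and status code; the `name` query
    parameter is read from context.request when one is present."""
    name = "world"
    req = getattr(context, "request", None)
    if req is not None and getattr(req, "args", None):
        name = req.args.get("name", name)
    return {"message": f"Hello, {name}!"}, 200
```

`func invoke` then exercises this handler locally before you `func deploy`.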
### Supported Languages
| Language | Template (`func create -l …`) | Runtime |
|---|---|---|
| Go | `go` | Native binary |
| Node.js | `node` | Node.js 18+ |
| Python | `python` | Python 3.9+ |
| Quarkus | `quarkus` | GraalVM/JVM |
| Rust | `rust` | Native binary |
| TypeScript | `typescript` | Node.js |
### Function Configuration (func.yaml)
```yaml
specVersion: 0.36.0
name: my-function
runtime: python
registry: docker.io/myuser
image: docker.io/myuser/my-function:latest
build:
  builder: pack
  buildpacks:
    - paketo-buildpacks/python
deploy:
  namespace: default
  annotations:
    autoscaling.knative.dev/min-scale: "1"
  env:
    - name: MY_VAR
      value: my-value
  volumes:
    - secret: my-secret
      path: /secrets
```
## Best Practices
### Resource Configuration
| Resource | Requirement | Notes |
|---|---|---|
| CPU requests | Required | Set based on actual usage |
| CPU limits | FORBIDDEN | Causes throttling |
| Memory requests | Required | Match your app needs |
| Memory limits | Required | Must equal requests |
```yaml
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    memory: 128Mi # Same as request
    # No CPU limit!
```
### Probes Configuration
```yaml
spec:
  containers:
    - image: my-app
      readinessProbe:
        httpGet:
          path: /health
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 10
      # Liveness probe optional for serverless
      # (Knative handles pod lifecycle)
```
### Cold Start Optimization
- Keep minimum replicas: set `min-scale: "1"` for latency-critical services
- Optimize container image: use distroless/alpine base images
- Lazy initialization: defer heavy initialization until first request
- Connection pooling: pre-warm database connections
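The lazy-initialization point can be sketched in Python; `expensive_setup` here is a stand-in for any heavy startup work (DB pool, model load), not a real API:

```python
import functools
import time


def expensive_setup() -> dict:
    """Stand-in for heavy initialization (DB pool, ML model load, ...)."""
    time.sleep(0.01)  # simulate slow work
    return {"ready": True}


@functools.lru_cache(maxsize=1)
def get_resource() -> dict:
    # Runs on first call only. Module import stays fast, so the cold-start
    # path doesn't pay for initialization the first request may not need.
    return expensive_setup()


def handler(_request) -> str:
    resource = get_resource()  # first call pays the cost; later calls are cheap
    return "ok" if resource["ready"] else "error"
```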
### Production Checklist
- Use KafkaChannel instead of InMemoryChannel
- Configure proper resource requests/limits
- Set up TLS with cert-manager
- Configure custom domain
- Set appropriate min/max scale values
- Enable dead letter sink for Triggers
- Configure monitoring (Prometheus metrics)
- Set up proper RBAC
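The dead-letter-sink item on the checklist is configured per Trigger through the eventing `delivery` spec (retry count, backoff, and `deadLetterSink`); the service name below is illustrative:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: my-trigger
spec:
  broker: default
  delivery:
    retry: 5
    backoffPolicy: exponential
    backoffDelay: PT0.5S
    deadLetterSink:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: dead-letter-handler
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: my-service
```

Events that exhaust their retries are forwarded to the dead-letter sink instead of being dropped.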
## Troubleshooting
### Common Issues
| Symptom | Cause | Solution |
|---|---|---|
| Service not accessible | DNS not configured | Configure domain mapping or use sslip.io |
| Pods not scaling up | Activator overloaded | Increase activator replicas |
| Slow cold starts | Large image or slow init | Optimize image, set `min-scale: "1"` |
| Events not delivered | Broker misconfigured | Check Broker/Trigger status |
| 503 errors | Service scaling | Check activator logs, increase scale |
| Certificate errors | cert-manager issue | Check ClusterIssuer and Certificate status |
### Diagnostic Commands
```bash
# Check Knative Serving status
kubectl get ksvc -A
kubectl describe ksvc <service-name>

# Check revisions
kubectl get revisions -A
kubectl describe revision <revision-name>

# Check routes
kubectl get routes -A

# Check Knative Eventing status
kubectl get brokers -A
kubectl get triggers -A
kubectl get sources -A

# Check event delivery
kubectl get subscriptions -A

# View activator logs
kubectl logs -n knative-serving -l app=activator -c activator

# View controller logs
kubectl logs -n knative-serving -l app=controller

# Check networking layer (Kourier)
kubectl logs -n kourier-system -l app=3scale-kourier-gateway
```
### Debug Event Flow
```bash
# Deploy event-display service for debugging
kubectl apply -f - <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-display
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-releases/knative.dev/eventing/cmd/event_display
EOF

# Create a trigger to route all events
kubectl apply -f - <<EOF
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: debug-trigger
spec:
  broker: default
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
EOF

# Watch events
kubectl logs -l serving.knative.dev/service=event-display -c user-container -f
```
## Integration with ArgoCD
### ApplicationSet for Knative Services
```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: knative-services
spec:
  generators:
    - git:
        repoURL: https://github.com/org/repo.git
        revision: HEAD
        directories:
          - path: knative-services/*
  template:
    metadata:
      name: '{{path.basename}}'
      annotations:
        # Disable SSA if using Jobs
        argocd.argoproj.io/compare-options: ServerSideDiff=false
    spec:
      project: default
      source:
        repoURL: https://github.com/org/repo.git
        targetRevision: HEAD
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: default
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```
### Health Checks for Knative Resources
ArgoCD automatically recognizes Knative resources. To add a custom health check, define it in the `argocd-cm` ConfigMap:
```yaml
data:
  resource.customizations.health.serving.knative.dev_Service: |
    hs = {}
    if obj.status ~= nil then
      if obj.status.conditions ~= nil then
        for _, condition in ipairs(obj.status.conditions) do
          if condition.type == "Ready" and condition.status == "True" then
            hs.status = "Healthy"
            hs.message = "Service is ready"
            return hs
          end
        end
      end
    end
    hs.status = "Progressing"
    hs.message = "Waiting for service to be ready"
    return hs
```