数据平面Envoy部分三:Envoy静态配置


实验参考:https://gitee.com/mageedu/servicemesh_in_practise

纯静态资源配置方式主要是直接在配置文件中通过static_resources配置参数明确定义listeners、clusters和secrets,各配置参数的数据类型如下面的配置所示;

◆其中,listeners用于配置纯静态类型的侦听器列表,clusters用于定义可用的集群列表及每个集群的端点,而可选的secrets用于定义TLS通信中用到的数字证书等配置信息
◆具体使用时,admin和static_resources两参数即可提供一个最小化的资源配置,甚至admin也可省略

{
"listeners": [],
"clusters": [],
"secrets": []
}

一、envoy启动

~]$ envoy -c <config>.{json,yaml,pb,pb_text}

◆扩展名代表了配置信息的组织格式;
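
启动前可先校验配置文件的语法,下面给出一个示例(配置文件路径为假设值):

~]$ envoy --mode validate -c /etc/envoy/envoy.yaml

◆校验模式仅检查配置的合法性,不会真正启动侦听器,校验完成后即退出;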

引导配置是Envoy配置信息的基点,用于承载Envoy的初始配置,它可能包括静态资源和动态资源的定义

◆静态资源(static_resources)在启动时直接加载

◆动态资源(dynamic_resources)则需要通过配置的xDS服务获取并生成

通常,Listener和Cluster是Envoy得以运行的基础,而二者的配置可以全部为静态格式,也可以混合使用动态及静态方式提供,或者全部配置为动态;

◆例如,一个yaml格式纯静态的基础配置框架:
static_resources:
  listeners:
  - name: ...
    address: {}
    filter_chains: []
  clusters:
  - name: ...
    type: ...
    connect_timeout: {}
    dns_lookup_family: V4_ONLY
    load_assignment: {}
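
与之相对,混合使用动态方式时,可在引导配置中通过dynamic_resources指定xDS服务;下面是一个假设性的片段草图(其中xds_cluster为假设的集群名称,还需在static_resources中静态定义):

dynamic_resources:
  cds_config:
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services:
      - envoy_grpc: { cluster_name: xds_cluster }   # xds_cluster需在static_resources中静态定义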

二、Listener的简易静态配置

侦听器主要用于定义Envoy监听的用于接收Downstreams请求的套接字、用于处理请求时调用的过滤器链及相关的其它配置属性;

listener的配置格式

static_resources:
  listeners:  
  - name:    #侦听器的名称
    address:  
      socket_address:
        address:   #侦听的ip地址
        port_value:   #侦听的端口
    filter_chains: 
    - filters:
      - name:    #过滤器的名称
        config:    #过滤器的专用配置(新版本中多由typed_config取代)

L4过滤器echo主要用于演示网络过滤器API的功能,它会回显接收到的所有数据至下游的请求者;在配置文件中调用时其名称为envoy.filters.network.echo;

下面是一个最简单的静态侦听器配置示例

static_resources:
  listeners:  #配置侦听器
  - name: listener_0   #侦听器的名称
    address: #配置侦听器的地址信息
      socket_address:
        address: 0.0.0.0  #侦听的ip地址
        port_value: 8080  #侦听的端口
    filter_chains: #配置侦听器的filter chains
    - filters:
      - name: envoy.filters.network.echo #使用什么filter,这里是使用的echo filter

三、Cluster简易静态配置

为“L7 Front Proxy”添加管理接口,仅需要在其用到的envoy.yaml配置文件中添加相关的配置信息即可;

下面给出一个简单的配置示例

admin:
  access_log_path: /tmp/admin_access.log
  address: 
    socket_address: { address: 0.0.0.0, port_value: 9901 }

static_resources:
  listeners:
  - name: listener_0
    address: 
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          ……

  clusters:
  - name: local_cluster
    connect_timeout: 0.25s
    type: STATIC
    ……

3、管理接口admin速览

 admin接口内置了多个/path,不同的path可能会分别接受不同的GET或POST请求;

admin commands are:
/: Admin home page # GET

/ready: Outputs a string and error code reflecting the state of the server. # GET,返回envoy服务当前的状态;

/certs: print certs on machine # GET,列出已加载的所有TLS证书及相关的信息;

/clusters: upstream cluster status # GET,列出上游集群的状态,额外支持使用“GET /clusters?format=json”;

/config_dump: dump current Envoy configs # GET,打印Envoy加载的各类配置信息;支持include_eds、mask和resource等查询参数;

/contention: dump current Envoy mutex contention stats (if enabled) # GET,互斥跟踪;

/cpuprofiler: enable/disable the CPU profiler # POST,启用或禁用cpuprofiler;

/healthcheck/fail: cause the server to fail health checks # POST,强制设定HTTP健康状态检查为失败;

/healthcheck/ok: cause the server to pass health checks # POST,强制设定HTTP健康状态检查为成功;

/heapprofiler: enable/disable the heap profiler # POST,启用或禁用heapprofiler;

/help: print out list of admin commands # GET,打印admin接口支持的命令列表;

/hot_restart_version: print the hot restart compatibility version # GET,打印热重启相关的信息;

/listeners: print listener addresses # GET,列出所有侦听器,支持使用“GET /listeners?format=json”;

/drain_listeners: Drains all listeners. # POST,驱逐所有的listener,支持使用inboundonly(仅入站侦听器)和graceful(优雅关闭)等查询参数;

/logging: query/change logging levels # POST,启用或禁用不同子组件上的不同日志记录级别;

/memory: print current allocation/heap usage # GET,打印当前内存分配信息,以字节为单位;

/quitquitquit: exit the server # POST,干净退出服务器;

/reset_counters: reset all counters to zero # POST,重置所有计数器;

/tap: This endpoint is used for configuring an active tap session. # POST,用于配置活动的tap会话;

/reopen_logs: Triggers reopen of all access logs. Behavior is similar to SIGUSR1 handling. # POST,重新打开所有的日志,功能类似于SIGUSR1信号;

/runtime: print runtime values # GET,以json格式输出所有运行时相关值;

/runtime_modify: modify runtime values # POST /runtime_modify?key1=value1&key2=value2,添加或修改在查询参数中传递的运行时值;

/server_info: print server version/status information # GET,打印当前Envoy Server的相关信息;

/stats: print server stats # GET,按需输出统计数据,例如GET /stats?filter=regex,另外还支持json和prometheus两种输出格式;

/stats/prometheus: print server stats in prometheus format # GET,输出prometheus格式的统计信息;
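
下面给出几个通过curl调用admin接口的示例(假设Admin接口监听于127.0.0.1:9901):

curl -s 127.0.0.1:9901/server_info
curl -s "127.0.0.1:9901/stats?filter=ingress_http"
curl -s -X POST "127.0.0.1:9901/logging?level=debug"   #调整日志级别
curl -s -X POST 127.0.0.1:9901/reset_counters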

示例输出

1)GET /clusters:列出所有已配置的集群,包括每个集群中发现的所有上游主机以及每个主机的统计信息;支持输出为json格式;
集群管理器信息:“version_info string”,无CDS时,则显示为“version_info::static”。
集群相关的信息:断路器、异常点检测和用于表示是否通过CDS添加的标识“added_via_api”。
每个主机的统计信息:包括总连接数、活动连接数、总请求数和主机的健康状态等;不健康的原因通常有以下三种
(1)failed_active_hc:未通过主动健康状态检测;
(2)failed_eds_health:被EDS标记为不健康;
(3)failed_outlier_check:未通过异常检测机制的检查;

2)GET /listeners:列出所有已配置的侦听器,包括侦听器的名称以及监听的地址;支持输出为json格式;

POST /reset_counters:将所有计数器重置为0;不过,它只会影响Server本地的输出,对于已经发送到外部存储系统的统计数据无效;

3)GET /config_dump:以json格式打印当前从Envoy的各种组件加载的配置信息;

4)GET /ready:获取Server就绪与否的状态,LIVE状态时返回200,否则返回503;

4、集群统计信息中主机的状态说明

主机的健康状态经由/clusters输出中的health_flags字段给出:healthy表示主机健康;failed_active_hc、failed_eds_health和failed_outlier_check则分别对应上面列出的三种不健康原因。

八、Envoy运行时配置概述

1)相较于静态资源配置来说,xDS API的动态配置机制使得Envoy的配置系统极具弹性;
  (1)但有时候配置的变动仅需要修改个别的功能特性,若通过xDS接口完成未免有些动静过大,Runtime便是面向这种场景的配置接口;
  (2)Runtime就是一个虚拟文件系统树,可通过一至多个本地文件系统目录、静态资源、RTDS动态发现和Admin Interface进行定义和配置;
     每个配置称为一个Layer,因而也称为“Layered Runtime”,这些Layer最终叠加生效;

2)换句话说,Runtime是与Envoy一起部署的外置实时配置系统,用于支持更改配置设置而无需重启Envoy或更改主配置;
  (1)运行时配置相关的运行时参数也称为“功能标志(feature flags)”或“决策者(decider)”;
  (2)通过运行时参数更改配置将实时生效;
  
3)运行时配置的实现也称为运行时配置供应者;
  (1)Envoy当前支持的运行时配置的实现是由多个层级组成的虚拟文件系统
     Envoy会监视配置目录中符号链接的切换,并在切换发生时重新加载文件树;
  (2)但Envoy会以默认运行时值和“null”供应者保障程序正确运行,因此,运行时配置系统并非必不可少;

4)启用Envoy的运行时配置机制需要在Bootstrap文件中予以启用和配置
  (1)定义在bootstrap配置文件中的layered_runtime 顶级字段之下
  (2)一旦在bootstrap中给出layered_runtime字段,则至少要定义出一个layer;

5)运行时配置用于指定包含重新加载配置元素的虚拟文件系统树
  (1)该虚拟文件系统可以通过静态引导配置、本地文件系统、管理控制台和RTDS派生的叠加来实现;
  (2)因此,可以将运行时视为由多个层组成的虚拟文件系统;
     在分层运行时的引导配置中指定各层级,后续层级中的运行时设置会覆盖较早的层级;

配置格式:

layered_runtime: # 配置运行时配置供应者,未指定时则使用null供应者,即所有参数均加载其默认值;
  layers: # 运行时的层级列表,后面的层将覆盖先前层上的配置;
  - name: ... # 运行时的层级名称,仅用于“GET /runtime”时的输出;
    static_layer: {...} # 静态运行时层级,遵循运行时protobuf JSON表示编码格式;不同于静态的xDS资源,静态运行时层一样可被后面的层所覆盖;
    # 此项配置,以及后面三个层级类型彼此互斥,因此一个列表项中仅可定义一层;
    disk_layer: {...} # 基于本地磁盘的运行时层级;
      symlink_root: ... # 通过符号链接访问的文件系统树;
      subdirectory: ... # 指定要在根目录中加载的子目录;
      append_service_cluster: ... # 是否将服务集群附加至符号链接根目录下的子路径上;
    admin_layer: {...} # 管理控制台运行时层级,即通过/runtime管理端点查看、通过/runtime_modify管理端点修改的配置方式;
    rtds_layer: {...} # 运行时发现服务(runtime discovery service)层级,即通过xDS API中的RTDS API动态发现相关的层级配置;
      name: ... # 为RTDS层订阅的资源名称;
      rtds_config: ... # RTDS的ConfigSource;

一个典型的配置示例,它定义了四个层级

layers:
- name: static_layer_0
  static_layer:
    health_check:
      min_interval: 5
- name: disk_layer_0
  disk_layer: { symlink_root: /srv/runtime/current, subdirectory: envoy }
- name: disk_layer_1
  disk_layer: { symlink_root: /srv/runtime/current, subdirectory: envoy_override, append_service_cluster: true }
- name: admin_layer_0
  admin_layer: {}

◆静态引导配置层级,直接指定配置的运行时参数及其值;
◆本地磁盘文件系统
◆本地磁盘文件系统,子目录覆盖(override_subdirectory)
◆管理控制台层级
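
其中,磁盘层的生效依赖于符号链接的原子切换;下面是一个操作上的示意草图(/srv/runtime路径与键名均为示例假设):

# 运行时键以“子目录/文件”形式组织,文件内容即参数值
mkdir -p /srv/runtime/v1/envoy/health_check
echo 10 > /srv/runtime/v1/envoy/health_check/min_interval
# Envoy监视symlink_root所指的符号链接,切换即触发重新加载
ln -sfn /srv/runtime/v1 /srv/runtime/current
# 更新配置:写入新的目录树v2后,原子地切换符号链接
ln -sfn /srv/runtime/v2 /srv/runtime/current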

九、Front Proxy

 在Envoy Mesh中,作为Front Proxy的Envoy通常是独立运行的进程,它将客户端请求代理至 Mesh中的各Service,而这些Service中的每个应用实例都会隐藏于一个Sidecar Proxy模式的envoy 实例背后。

十、TLS

Envoy Mesh中的TLS模式大体有如下几种常用场景

1、Front Proxy面向下游客户端提供https服务,但Front Proxy、Mesh内部的各服务间依然使用http协议
https→http

2、Front Proxy面向下游客户端提供https服务,而且Front Proxy、Mesh内部的各服务间也使用https协议
https→https
但是内部各Service间的通信也有如下两种情形
(1)仅客户端验证服务端证书
(2)客户端与服务端之间互相验证彼此的证书(mTLS)
注意:对于容器化的动态环境来说,证书预配和管理将成为显著难题

3、Front Proxy直接以TCP Proxy的代理模式,在下游客户端与上游服务端之间透传tls协议;
https-passthrough
集群内部的东西向流量同样工作于https协议模型

1、TLS Front Proxy

 仅需要配置Listener面向下游客户端提供tls通信,下面是Front Proxy Envoy 的配置示例

static_resources:
  listeners:
  - name: listener_http
    address:
      socket_address: { address: 0.0.0.0, port_value: 8443 }
    filter_chains: 
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: web_service_01
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: web_cluster_01 }
          http_filters:
          - name: envoy.filters.http.router
      transport_socket:
        name: envoy.transport_sockets.tls
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
          common_tls_context:
            tls_certificates:
            # The following self-signed certificate pair is generated using:
            # $ openssl req -x509 -newkey rsa:2048 -keyout front-proxy.key -out front-proxy.crt -days 3650 -nodes -subj '/CN=www.magedu.com'
            - certificate_chain:
                filename: "/etc/envoy/certs/front-proxy.crt"
              private_key:
                filename: "/etc/envoy/certs/front-proxy.key"
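
可在下游客户端测试tls连接;由于使用的是自签证书,curl需以-k忽略证书校验(下面的Front Proxy地址为假设值):

root@test:~# curl -k https://172.31.8.2:8443/
root@test:~# openssl s_client -connect 172.31.8.2:8443 </dev/null   #查看TLS握手与证书细节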

2、Upstream and downstream TLS

除了Listener中面向下游提供tls通信,Front Proxy还要以tls协议与Envoy Mesh中的各Service建立tls连接

下面是Envoy Mesh中的某Service的Sidecar Proxy Envoy的配置示例

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 443 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          ……
          http_filters:
          - name: envoy.filters.http.router
      transport_socket:
        name: envoy.transport_sockets.tls
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
          common_tls_context:
            tls_certificates:
            - certificate_chain:
                filename: "/etc/envoy/certs/webserver.crt"
              private_key:
                filename: "/etc/envoy/certs/webserver.key"

下面是Front Proxy Envoy 中的Cluster面向上游通信的配置示例

clusters:
  - name: web_cluster_01
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: web_cluster_01
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 172.31.8.11, port_value: 443 }
        - endpoint:
            address:
              socket_address: { address: 172.31.8.12, port_value: 443 }
    transport_socket:
      name: envoy.transport_sockets.tls
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
        ......
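
上面省略号处通常还可补充对上游证书的校验相关配置;下面是一个假设性草图(SNI名称与CA文件路径均为示例假设):

    transport_socket:
      name: envoy.transport_sockets.tls
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.UpstreamTlsContext
        sni: www.magedu.com              # 假设的上游SNI名称
        common_tls_context:
          validation_context:
            trusted_ca:
              filename: /etc/envoy/certs/ca.crt   # 假设的CA证书,用于校验上游证书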

3、Front Proxy https passthrough

TLS passthrough模式的Front Proxy需要使用TCP Proxy类型的Listener,Cluster的相关配置中也无需再指定transport_socket相关的配置。

但Envoy Mesh中各Service需要基于tls提供服务。

static_resources:
  listeners:
  - name: listener_http
    address:
      socket_address: { address: 0.0.0.0, port_value: 8443 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy 
          cluster: web_cluster_01
          stat_prefix: https_passthrough
  clusters:
  - name: web_cluster_01
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: web_cluster_01
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 172.31.9.11, port_value: 443 }
        - endpoint:
            address:
              socket_address: { address: 172.31.9.12, port_value: 443 }

十一、envoy静态配置实例

提示:为便于建立测试环境,示例将以 docker-compose工具编排运行,相关的代码位于

https://github.com/ikubernetes/servicemesh_in_practice.git

https://gitee.com/mageedu/servicemesh_in_practise

1、envoy-echo

通过telnet连接envoy侦听的IP和端口后,输入什么信息,就会回显什么信息

envoy.yaml

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 8080
    filter_chains:
    - filters:
      - name: envoy.filters.network.echo

Dockerfile

FROM envoyproxy/envoy-alpine:v1.20.0
ADD envoy.yaml /etc/envoy/

docker-compose.yaml

version: '3.3'
  
services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.20.0
    volumes:
    - ./envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.4.2
        aliases:
        - envoy-echo

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.4.0/24

运行envoy-echo实验

root@envoy:~/servicemesh_in_practise/Envoy-Basics/envoy-echo# docker-compose up

telnet envoy的ip+端口

root@test:~# telnet 172.31.4.2 8080
Trying 172.31.4.2...
Connected to 172.31.4.2.
Escape character is '^]'.
hello
hello
world
world
#输入什么,会显示什么

修改envoy配置文件,在envoy容器中测试

编辑envoy-v2.yaml

admin:
  access_log_path: /dev/null
  address:
    socket_address:
      address: 127.0.0.1
      port_value: 0

static_resources:
  clusters:
  - name: cluster_0
    connect_timeout: 0.25s
    load_assignment:
      cluster_name: cluster_0
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: 127.0.0.1
                port_value: 0

  listeners:
  - name: listener_0
    address:
      socket_address:
        address: 127.0.0.1
        port_value: 8080
    filter_chains:
    - filters:
      - name: envoy.filters.network.echo

docker-compose.yaml

version: '3.3'
  
services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.20.0
    volumes:
    - ./envoy-v2.yaml:/etc/envoy/envoy.yaml  #使用envoy-v2.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.4.2
        aliases:
        - envoy-echo

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.4.0/24

再次运行

root@envoy:~/servicemesh_in_practise/Envoy-Basics/envoy-echo# docker-compose up

进到容器

root@test:/apps/servicemesh_in_practise/Envoy-Basics/envoy-echo# docker-compose exec envoy sh
/ # nc 127.0.0.1 8080
abv
abv
ni hao world
ni hao world
#输入什么,就显示什么

2、http-ingress

 实验环境

两个Service:

envoy:Sidecar Proxy
webserver01:第一个后端服务,地址为127.0.0.1

访问envoy的ip+端口,便可访问envoy后端的服务

envoy.yaml

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: web_service_1
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_cluster }
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: local_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8080 }

docker-compose.yaml

version: '3'
  
services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
      - ENVOY_UID=0
    #docker-compose up时若报“error initializing configuration '/etc/envoy/envoy.yaml': cannot bind '0.0.0.0:80': Permission denied”,则需要添加该环境变量

    volumes:
    - ./envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.3.2
        aliases:
        - ingress

  webserver01:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    network_mode: "service:envoy"
    depends_on:
    - envoy

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.3.0/24

访问172.31.3.2:80,可以被envoy转发到后端webserver01上

root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/http-ingress# docker-compose up
#重新克隆一个窗口多访问几次
root@test:/apps/servicemesh_in_practise/Envoy-Basics/http-ingress# curl 172.31.3.2
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: d4eda0b2b84c, ServerIP: 172.31.3.2!
root@test:/apps/servicemesh_in_practise/Envoy-Basics/http-ingress# curl 172.31.3.2
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: dc4bd7a1316f, ServerIP: 172.31.3.2!
root@test:/apps/servicemesh_in_practise/Envoy-Basics/http-ingress# curl 172.31.3.2
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: dc4bd7a1316f, ServerIP: 172.31.3.2!
root@test:/apps/servicemesh_in_practise/Envoy-Basics/http-ingress# curl 172.31.3.2
iKubernetes demoapp v1.0 !! ClientIP: 127.0.0.1, ServerName: dc4bd7a1316f, ServerIP: 172.31.3.2!

#在前台运行的envoy程序上查看日志信息
......
webserver01_1  |  * Running on http://127.0.0.1:8080/ (Press CTRL+C to quit)
webserver01_1  | 127.0.0.1 - - [01/Dec/2021 08:36:37] "GET / HTTP/1.1" 200 -
webserver01_1  | 127.0.0.1 - - [01/Dec/2021 08:38:49] "GET / HTTP/1.1" 200 -
webserver01_1  | 127.0.0.1 - - [01/Dec/2021 08:38:50] "GET / HTTP/1.1" 200 -

3、http-egress

实验环境

三个Service:

envoy:Front Proxy,地址为172.31.4.2
webserver01:第一个外部服务,地址为172.31.4.11
webserver02:第二个外部服务,地址为172.31.4.12

访问envoy的ip,可以转发到webserver01和webserver02上

envoy.yaml

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 127.0.0.1, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: web_service_1
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: web_cluster }
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: web_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: web_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 172.31.4.11, port_value: 80 }
        - endpoint:
            address:
              socket_address: { address: 172.31.4.12, port_value: 80 }

docker-compose.yaml

version: '3.3'

services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
      - ENVOY_UID=0
    volumes:
    - ./envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.4.2
        aliases:
        - front-proxy
    depends_on:
    - webserver01
    - webserver02

  client:
    image: ikubernetes/admin-toolbox:v1.0
    network_mode: "service:envoy"
    depends_on:
    - envoy

  webserver01:
    image: ikubernetes/demoapp:v1.0
    hostname: webserver01
    networks:
      envoymesh:
        ipv4_address: 172.31.4.11
        aliases:
        - webserver01

  webserver02:
    image: ikubernetes/demoapp:v1.0
    hostname: webserver02
    networks:
      envoymesh:
        ipv4_address: 172.31.4.12
        aliases:
        - webserver02

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.4.0/24

实验验证

root@envoy:/servicemesh_in_practise-develop/Envoy-Basics/http-egress# docker-compose up

另外克隆一个窗口,进入容器

root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/http-egress# docker-compose exec client sh
[root@b8f9b62f2771 /]# 
[root@b8f9b62f2771 /]# curl 127.0.0.1
iKubernetes demoapp v1.0 !! ClientIP: 172.31.4.2, ServerName: webserver01, ServerIP: 172.31.4.11!
[root@b8f9b62f2771 /]# curl 127.0.0.1
iKubernetes demoapp v1.0 !! ClientIP: 172.31.4.2, ServerName: webserver02, ServerIP: 172.31.4.12!
[root@b8f9b62f2771 /]# curl 127.0.0.1
iKubernetes demoapp v1.0 !! ClientIP: 172.31.4.2, ServerName: webserver01, ServerIP: 172.31.4.11!
[root@b8f9b62f2771 /]# curl 127.0.0.1
iKubernetes demoapp v1.0 !! ClientIP: 172.31.4.2, ServerName: webserver02, ServerIP: 172.31.4.12!
#在容器访问127.0.0.1,envoy会把请求以轮询的方式转发到webserver01和webserver02上

4、http-front-proxy

 实验环境

三个Service:

envoy:Front Proxy,地址为172.31.2.2
webserver01:第一个后端服务,地址为172.31.2.11
webserver02:第二个后端服务,地址为172.31.2.12
#把域名www.ik8s.io和www.magedu.com映射到172.31.2.2
#访问域名www.ik8s.io会轮询转发到webserver01和webserver02上
#访问域名www.magedu.com会跳转到www.ik8s.io,并轮询转发到webserver01和webserver02上

envoy.yaml

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: web_service_1
              domains: ["*.ik8s.io", "ik8s.io"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_cluster }
            - name: web_service_2
              domains: ["*.magedu.com",“magedu.com"]
              routes:
              - match: { prefix: "/" }
                redirect:
                  host_redirect: "www.ik8s.io"
          http_filters:
          - name: envoy.filters.http.router
  clusters:
  - name: local_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 172.31.2.11, port_value: 8080 }
        - endpoint:
            address:
              socket_address: { address: 172.31.2.12, port_value: 8080 }

docker-compose.yaml

version: '3.3'
  
services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
      - ENVOY_UID=0
    volumes:
    - ./envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.2.2
        aliases:
        - front-proxy
    depends_on:
    - webserver01
    - webserver02

  webserver01:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
    hostname: webserver01
    networks:
      envoymesh:
        ipv4_address: 172.31.2.11
        aliases:
        - webserver01

  webserver02:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
    hostname: webserver02
    networks:
      envoymesh:
        ipv4_address: 172.31.2.12
        aliases:
        - webserver02

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.2.0/24

实验验证

root@envoy:/servicemesh_in_practise-develop/Envoy-Basics/http-front-proxy# docker-compose up

另外克隆一个窗口

#访问域名www.ik8s.io,envoy会以轮询的方式转发到webserver01和webserver02上
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/http-front-proxy# curl -H "host: www.ik8s.io" 172.31.2.2
iKubernetes demoapp v1.0 !! ClientIP: 172.31.2.2, ServerName: webserver01, ServerIP: 172.31.2.11!
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/http-front-proxy# curl -H "host: www.ik8s.io" 172.31.2.2
iKubernetes demoapp v1.0 !! ClientIP: 172.31.2.2, ServerName: webserver02, ServerIP: 172.31.2.12!

#访问域名www.magedu.com会跳转到www.ik8s.io上
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/http-front-proxy# curl -I -H "host: www.magedu.com" 172.31.2.2
HTTP/1.1 301 Moved Permanently
location: http://www.ik8s.io/   #跳转到了www.ik8s.io上
date: Wed, 01 Dec 2021 14:07:55 GMT
server: envoy
transfer-encoding: chunked
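
#若希望curl自动跟随跳转并仍经由Front Proxy访问,可用--resolve将两个域名都解析到envoy(假设性示例)
root@test:~# curl -L --resolve www.ik8s.io:80:172.31.2.2 --resolve www.magedu.com:80:172.31.2.2 http://www.magedu.com/
#第一跳返回301后,curl会以Host: www.ik8s.io重新请求,最终由local_cluster返回响应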

5、tcp-front-proxy

 实验环境

 三个Service:

envoy:Front Proxy,地址为172.31.1.2
webserver01:第一个后端服务,地址为172.31.1.11
webserver02:第二个后端服务,地址为172.31.1.12
#访问envoy的ip:172.31.1.2,会以轮询的方式转发到webserver01和webserver02上

envoy.yaml

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.tcp_proxy
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
          stat_prefix: tcp
          cluster: local_cluster

  clusters:
  - name: local_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 172.31.1.11, port_value: 8080 }
        - endpoint:
            address:
              socket_address: { address: 172.31.1.12, port_value: 8080 }

docker-compose.yaml

version: '3.3'

services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
      - ENVOY_UID=0
    volumes:
    - ./envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.1.2
        aliases:
        - front-proxy
    depends_on:
    - webserver01
    - webserver02

  webserver01:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
    hostname: webserver01
    networks:
      envoymesh:
        ipv4_address: 172.31.1.11
        aliases:
        - webserver01

  webserver02:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
    hostname: webserver02
    networks:
      envoymesh:
        ipv4_address: 172.31.1.12
        aliases:
        - webserver02

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.1.0/24

实验验证

root@envoy:/servicemesh_in_practise-develop/Envoy-Basics/tcp-front-proxy# docker-compose up

另外克隆一个窗口,访问envoy的ip:172.31.1.2,会以轮询的方式转发到webserver01和webserver02上

root@envoy:/servicemesh_in_practise-develop/Envoy-Basics/tcp-front-proxy# curl 172.31.1.2
iKubernetes demoapp v1.0 !! ClientIP: 172.31.1.2, ServerName: webserver01, ServerIP: 172.31.1.11!
root@envoy:/servicemesh_in_practise-develop/Envoy-Basics/tcp-front-proxy# curl 172.31.1.2
iKubernetes demoapp v1.0 !! ClientIP: 172.31.1.2, ServerName: webserver02, ServerIP: 172.31.1.12!

6、admin-interface

 实验环境

三个Service:

envoy:Front Proxy,地址为172.31.5.2
webserver01:第一个后端服务,地址为172.31.5.11
webserver02:第二个后端服务,地址为172.31.5.12
#访问envoy的9901端口可以获取相应的信息

envoy.yaml

admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0   #生产环境中应配置为127.0.0.1,否则不安全
       port_value: 9901

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: web_service_1
              domains: ["*.ik8s.io", "ik8s.io"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_cluster }
            - name: web_service_2
              domains: ["*.magedu.com",“magedu.com"]
              routes:
              - match: { prefix: "/" }
                redirect:
                  host_redirect: "www.ik8s.io"
          http_filters:
          - name: envoy.filters.http.router
  clusters:
  - name: local_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 172.31.5.11, port_value: 8080 }
        - endpoint:
            address:
              socket_address: { address: 172.31.5.12, port_value: 8080 }

docker-compose.yaml

services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
      - ENVOY_UID=0
    volumes:
    - ./envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.5.2
        aliases:
        - front-proxy
    depends_on:
    - webserver01
    - webserver02
    ports:
    - "9901:9901"

  webserver01:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
    hostname: webserver01
    networks:
      envoymesh:
        ipv4_address: 172.31.5.11
        aliases:
        - webserver01

  webserver02:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
    hostname: webserver02
    networks:
      envoymesh:
        ipv4_address: 172.31.5.12
        aliases:
        - webserver02

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.5.0/24

实验验证

root@envoy:/servicemesh_in_practise-develop/Envoy-Basics/admin-interface# docker-compose up

另外克隆一个窗口,访问172.31.5.2:9901

#显示帮助信息
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/admin-interface# curl 172.31.5.2:9901/help
admin commands are:
  /: Admin home page
  /certs: print certs on machine
  /clusters: upstream cluster status
  /config_dump: dump current Envoy configs (experimental)
  /contention: dump current Envoy mutex contention stats (if enabled)
  /cpuprofiler: enable/disable the CPU profiler
  /drain_listeners: drain listeners
  /healthcheck/fail: cause the server to fail health checks
  /healthcheck/ok: cause the server to pass health checks
  /heapprofiler: enable/disable the heap profiler
  /help: print out list of admin commands
  /hot_restart_version: print the hot restart compatibility version
  /init_dump: dump current Envoy init manager information (experimental)
  /listeners: print listener info
  /logging: query/change logging levels
  /memory: print current allocation/heap usage
  /quitquitquit: exit the server
  /ready: print server state, return 200 if LIVE, otherwise return 503
  /reopen_logs: reopen access logs
  /reset_counters: reset all counters to zero
  /runtime: print runtime values
  /runtime_modify: modify runtime values
  /server_info: print server version/status information
  /stats: print server stats
  /stats/prometheus: print server stats in prometheus format
  /stats/recentlookups: Show recent stat-name lookups
  /stats/recentlookups/clear: clear list of stat-name lookups and counter
  /stats/recentlookups/disable: disable recording of reset stat-name lookup names
  /stats/recentlookups/enable: enable recording of reset stat-name lookup names

# 查看完整的配置信息
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/admin-interface# curl 172.31.5.2:9901/config_dump
......
         }
        ]
       },
       {
        "name": "web_service_2",
        "domains": [
         "*.magedu.com",
         "“magedu.com\""
        ],
        "routes": [
         {
          "match": {
           "prefix": "/"
          },
          "redirect": {
           "host_redirect": "www.ik8s.io"
          }
         }
        ]
       }
      ]
     },
     "last_updated": "2021-12-01T14:26:32.586Z"
......

#列出各Listener
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/admin-interface# curl 172.31.5.2:9901/listeners
listener_0::0.0.0.0:80

#列出各cluster
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/admin-interface# curl 172.31.5.2:9901/clusters
local_cluster::observability_name::local_cluster
local_cluster::default_priority::max_connections::1024
local_cluster::default_priority::max_pending_requests::1024
local_cluster::default_priority::max_requests::1024
local_cluster::default_priority::max_retries::3
local_cluster::high_priority::max_connections::1024
local_cluster::high_priority::max_pending_requests::1024
local_cluster::high_priority::max_requests::1024
local_cluster::high_priority::max_retries::3
local_cluster::added_via_api::false
local_cluster::172.31.5.11:8080::cx_active::0
local_cluster::172.31.5.11:8080::cx_connect_fail::0
local_cluster::172.31.5.11:8080::cx_total::0
local_cluster::172.31.5.11:8080::rq_active::0
local_cluster::172.31.5.11:8080::rq_error::0
local_cluster::172.31.5.11:8080::rq_success::0
local_cluster::172.31.5.11:8080::rq_timeout::0
local_cluster::172.31.5.11:8080::rq_total::0
local_cluster::172.31.5.11:8080::hostname::
local_cluster::172.31.5.11:8080::health_flags::healthy
local_cluster::172.31.5.11:8080::weight::1
local_cluster::172.31.5.11:8080::region::
local_cluster::172.31.5.11:8080::zone::
local_cluster::172.31.5.11:8080::sub_zone::
local_cluster::172.31.5.11:8080::canary::false
local_cluster::172.31.5.11:8080::priority::0
local_cluster::172.31.5.11:8080::success_rate::-1.0
local_cluster::172.31.5.11:8080::local_origin_success_rate::-1.0
local_cluster::172.31.5.12:8080::cx_active::0
local_cluster::172.31.5.12:8080::cx_connect_fail::0
local_cluster::172.31.5.12:8080::cx_total::0
local_cluster::172.31.5.12:8080::rq_active::0
local_cluster::172.31.5.12:8080::rq_error::0
local_cluster::172.31.5.12:8080::rq_success::0
local_cluster::172.31.5.12:8080::rq_timeout::0
local_cluster::172.31.5.12:8080::rq_total::0
local_cluster::172.31.5.12:8080::hostname::
local_cluster::172.31.5.12:8080::health_flags::healthy
local_cluster::172.31.5.12:8080::weight::1
local_cluster::172.31.5.12:8080::region::
local_cluster::172.31.5.12:8080::zone::
local_cluster::172.31.5.12:8080::sub_zone::
local_cluster::172.31.5.12:8080::canary::false
local_cluster::172.31.5.12:8080::priority::0
local_cluster::172.31.5.12:8080::success_rate::-1.0
local_cluster::172.31.5.12:8080::local_origin_success_rate::-1.0

在宿主机浏览器中访问宿主机IP:9901,也可打开admin管理页面(compose中已将9901端口映射至宿主机)

7、layered-runtime

 实验环境

三个Service:

envoy:Front Proxy,地址为172.31.14.2
webserver01:第一个后端服务,地址为172.31.14.11
webserver02:第二个后端服务,地址为172.31.14.12

envoy.yaml

admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

layered_runtime:
  layers:
  - name: static_layer_0
    static_layer:
      health_check:
        min_interval: 5
  - name: admin_layer_0
    admin_layer: {}

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: web_service_1
              domains: ["*.ik8s.io", "ik8s.io"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_cluster }
            - name: web_service_2
              domains: ["*.magedu.com", "magedu.com"]
              routes:
              - match: { prefix: "/" }
                redirect:
                  host_redirect: "www.ik8s.io"
          http_filters:
          - name: envoy.filters.http.router
  clusters:
  - name: local_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 172.31.14.11, port_value: 8080 }
        - endpoint:
            address:
              socket_address: { address: 172.31.14.12, port_value: 8080 }

docker-compose.yaml

version: '3.3'
  
services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.20.0
    environment:
      - ENVOY_UID=0
    volumes:
    - ./envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.14.2
        aliases:
        - front-proxy
    depends_on:
    - webserver01
    - webserver02

  webserver01:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
    hostname: webserver01
    networks:
      envoymesh:
        ipv4_address: 172.31.14.11
        aliases:
        - webserver01

  webserver02:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
    hostname: webserver02
    networks:
      envoymesh:
        ipv4_address: 172.31.14.12
        aliases:
        - webserver02

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.14.0/24

实验验证

root@envoy:/servicemesh_in_practise-develop/Envoy-Basics/layered-runtime# docker-compose up

另外克隆一个窗口

root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/layered-runtime# curl 172.31.14.2:9901/runtime
{
 "entries": {
  "health_check.min_interval": {
   "final_value": "5",
   "layer_values": [
    "5",
    ""
   ]
  }
 },
 "layers": [
  "static_layer_0",
  "admin_layer_0"
 ]
}
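
上面static_layer_0中设定了health_check.min_interval,还可通过admin层在线覆盖它(示意,沿用上面的键名):

root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/layered-runtime# curl -X POST "172.31.14.2:9901/runtime_modify?health_check.min_interval=10"
OK
root@test:/apps/servicemesh_in_practise-develop/Envoy-Basics/layered-runtime# curl -s 172.31.14.2:9901/runtime
#此时final_value变为10,即admin_layer_0覆盖了static_layer_0中的5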