
suwang48404/amazon-vpc-cni-k8s 0

Networking plugin repository for pod networking in Kubernetes using Elastic Network Interfaces on AWS

suwang48404/antrea 0

A Kubernetes networking solution based on Open vSwitch

suwang48404/azure-container-networking 0

Azure Container Networking Solutions for Linux and Windows Containers

suwang48404/cli 0

The Docker CLI

suwang48404/client-go 0

Go client for Kubernetes.

suwang48404/kind 0

Kubernetes IN Docker - local clusters for testing Kubernetes

suwang48404/libnetwork 0

networking for containers

suwang48404/moby 0

Moby Project - a collaborative project for the container ecosystem to assemble container-based systems

suwang48404/swarmkit 0

A toolkit for orchestrating distributed systems at any scale. It includes primitives for node discovery, raft-based consensus, task scheduling and more.

pull request comment vmware-tanzu/antrea

Add API types for Namespaced Antrea NetworkPolicy

@abhiraut @jianjuns @tnqn

I am fine with the change. thx

abhiraut

comment created time in 3 days

Pull request review comment vmware-tanzu/antrea

Add API types for Namespaced Antrea NetworkPolicy

```diff
+// Copyright 2020 Antrea Authors
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package v1beta1
+
+import (
+	v1 "k8s.io/api/core/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/util/intstr"
+)
+
+// +genclient
+// +genclient:noStatus
+// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
+
+type ExternalEntity struct {
+	metav1.TypeMeta `json:",inline"`
+	// Standard metadata of the object.
+	metav1.ObjectMeta `json:"metadata,omitempty"`
+	// Desired state of the external entity.
+	Spec ExternalEntitySpec `json:"spec"`
+}
+
+// ExternalEntitySpec defines the desired state for ExternalEntity.
+type ExternalEntitySpec struct {
+	// Endpoints is a list of external endpoints associated with this entity.
+	Endpoints []ExternalEndpoint `json:"endpoints"`
+	// ExternalNode is the opaque identifier of the agent/controller responsible
+	// for additional computation of this external entity.
+	ExternalNode string `json:"externalNode"`
+}
+
+// ExternalEndpoint refers to an endpoint associated with the ExternalEntity.
+type ExternalEndpoint struct {
+	// IP associated with this endpoint.
+	IP IPAddress `json:"ip"`
+	// Name identifies this endpoint. Could be the interface name in case of VMs.
+	// +optional
+	Name string `json:"name"`
+	// Ports maintain the list of named ports.
+	Ports []NamedPort `json:"ports"`
+}
+
+// NamedPort describes the port and protocol to match in a rule.
+type NamedPort struct {
+	// The protocol (TCP, UDP, or SCTP) which traffic must match.
+	// If not specified, this field defaults to TCP.
+	// +optional
+	Protocol *v1.Protocol `json:"protocol"`
+	// The port on the given protocol. This can either be a numerical
+	// or named port on a Pod. If this field is not provided, this
+	// matches all port names and numbers.
+	// +optional
+	Port *intstr.IntOrString `json:"port"`
```

can we separate it into Port and Name instead? Sometimes both name and port are required. For instance, we could have:

protocol: TCP, port: 8888, name: ssh
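
A minimal Go sketch of the suggested split, for illustration only; this is the reviewer's proposal, not the API the PR defines, and the exact field shapes are assumptions:

```go
package v1beta1

import (
	v1 "k8s.io/api/core/v1"
)

// NamedPort as proposed above, with Port and Name as separate fields so
// that a rule can carry both a numerical port and a name at the same time.
// Hypothetical sketch; the posted diff folds both into one IntOrString.
type NamedPort struct {
	// The protocol (TCP, UDP, or SCTP) which traffic must match.
	// If not specified, this field defaults to TCP.
	// +optional
	Protocol *v1.Protocol `json:"protocol,omitempty"`
	// The numerical port on the given protocol.
	// +optional
	Port int32 `json:"port,omitempty"`
	// Name of the port, e.g. "ssh", usable alongside the number.
	// +optional
	Name string `json:"name,omitempty"`
}
```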

abhiraut

comment created time in 3 days

Pull request review comment vmware-tanzu/antrea

Add API types for Namespaced Antrea NetworkPolicy

```diff
+// Copyright 2020 Antrea Authors
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package v1beta1
+
+import (
+	v1 "k8s.io/api/core/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/util/intstr"
+)
+
+// +genclient
+// +genclient:noStatus
+// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
+
+type ExternalEntity struct {
+	metav1.TypeMeta `json:",inline"`
+	// Standard metadata of the object.
+	metav1.ObjectMeta `json:"metadata,omitempty"`
+	// Desired state of the external entity.
+	Spec ExternalEntitySpec `json:"spec"`
+}
+
+// ExternalEntitySpec defines the desired state for ExternalEntity.
+type ExternalEntitySpec struct {
+	// Endpoints is a list of external endpoints associated with this entity.
+	Endpoints []ExternalEndpoint `json:"endpoints"`
+	// ExternalNode is the opaque identifier of the agent/controller responsible
+	// for additional computation of this external entity.
+	ExternalNode string `json:"scope"`
```

json name mismatch

abhiraut

comment created time in 4 days

pull request comment vmware-tanzu/antrea

Add API types for Namespaced Antrea NetworkPolicy

Hi @abhiraut, I wonder if we should also update the existing Antrea internal policy to include references to ExternalEntity along with the Pod reference?

abhiraut

comment created time in 6 days

Pull request review comment vmware-tanzu/antrea

Add API types for Namespaced Antrea NetworkPolicy

```diff
+// Copyright 2020 Antrea Authors
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package v1beta1
+
+import (
+	v1 "k8s.io/api/core/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/util/intstr"
+)
+
+// +genclient
+// +genclient:noStatus
+// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
+
+type AntreaNetworkPolicy struct {
+	metav1.TypeMeta `json:",inline"`
+	// Standard metadata of the object.
+	metav1.ObjectMeta `json:"metadata,omitempty"`
+
+	// Specification of the desired behavior of AntreaNetworkPolicy.
+	Spec AntreaNetworkPolicySpec `json:"spec"`
+}
+
+// AntreaNetworkPolicySpec defines the desired state for AntreaNetworkPolicy.
+type AntreaNetworkPolicySpec struct {
+	// Priority specfies the order of the AntreaNetworkPolicy relative to
+	// other AntreaNetworkPolicies.
+	Priority int32 `json:"priority"`
+	// Select workloads on which the rules will be applied to.
+	AppliedTo []NetworkPolicyPeer `json:"appliedTo"`
+	// Set of ingress rules evaluated based on the order in which they are set.
+	// Currently Ingress rule supports setting the `From` field but not the `To`
+	// field within a Rule.
+	// +optional
+	Ingress []Rule `json:"ingress"`
+	// Set of egress rules evaluated based on the order in which they are set.
+	// Currently Egress rule supports setting the `To` field but not the `From`
+	// field within a Rule.
+	// +optional
+	Egress []Rule `json:"egress"`
+}
+
+// Rule describes the traffic allowed to/from the workloads selected by
+// Spec.AppliedTo. Based on the action specified in the rule, traffic is either
+// allowed or denied which exactly match the specified ports and protocol.
+type Rule struct {
+	// Action specifies the action to be applied on the rule. Defaults to
+	// ALLOW action.
+	// +optional
+	Action *RuleAction `json:"action"`
+	// Set of port and protocol allowed/denied by the rule. If this field is unset
+	// or empty, this rule matches all ports.
+	// +optional
+	Ports []NetworkPolicyPort `json:"ports"`
+	// Rule is matched if traffic originates from workloads selected by
+	// this field. If this field is empty, this rule matches all sources.
+	// +optional
+	From []NetworkPolicyPeer `json:"from"`
+	// Rule is matched if traffic is intended for workloads selected by
+	// this field. If this field is empty or missing, this rule matches all
+	// destinations.
+	// +optional
+	To []NetworkPolicyPeer `json:"to"`
+}
+
+// NetworkPolicyPeer describes the grouping selector of workloads.
+type NetworkPolicyPeer struct {
+	// IPBlock describes the IPAddresses/IPBlocks that is matched in to/from.
+	// IPBlock cannot be set as part of the AppliedTo field
+	// Cannot be set with any other selector.
+	// +optional
+	IPBlock *IPBlock `json:"ipBlock,omitempty"`
+	// Select Pods from AntreaNetworkPolicy's Namespace as workloads in
+	// AppliedTo/To/From fields. If set with NamespaceSelector, Pods are
+	// matched from Namespaces matched by the NamespaceSelector.
+	// Cannot be set with any other selector except NamespaceSelector.
+	// +optional
+	PodSelector *metav1.LabelSelector `json:"podSelector,omitempty"`
+	// Select all Pods from Namespaces matched by this selector, as
+	// workloads in To/From fields. If set with PodSelector,
+	// Pods are matched from Namespaces matched by the NamespaceSelector.
+	// Cannot be set with any other selector except PodSelector or
+	// ExternalEntitySelector.
+	// +optional
+	NamespaceSelector *metav1.LabelSelector `json:"namespaceSelector,omitempty"`
+	// Select ExternalEntities from AntreaNetworkPolicy's Namespace as workloads
+	// in AppliedTo/To/From fields. If set with NamespaceSelector,
+	// ExternalEntities are matched from Namespaces matched by the
+	// NamespaceSelector.
+	// Cannot be set with any other selector except NamespaceSelector.
+	ExternalEntitySelector *metav1.LabelSelector `json:"externalEntitySelector,omitempty"`
```
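
As a rough illustration of the selector semantics documented above (a sketch that assumes the NetworkPolicyPeer type from this diff), a peer that selects Pods labeled app=db from all Namespaces labeled env=prod:

```go
package v1beta1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// examplePeer combines PodSelector with NamespaceSelector: Pods labeled
// app=db are matched from every Namespace labeled env=prod, instead of
// only from the AntreaNetworkPolicy's own Namespace.
var examplePeer = NetworkPolicyPeer{
	PodSelector: &metav1.LabelSelector{
		MatchLabels: map[string]string{"app": "db"},
	},
	NamespaceSelector: &metav1.LabelSelector{
		MatchLabels: map[string]string{"env": "prod"},
	},
}
```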

I see. thx

abhiraut

comment created time in 7 days

Pull request review comment vmware-tanzu/antrea

Add API types for Namespaced Antrea NetworkPolicy

```diff
+// Copyright 2020 Antrea Authors
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package v1beta1
+
+import (
+	v1 "k8s.io/api/core/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/util/intstr"
+)
+
+// +genclient
+// +genclient:noStatus
+// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
+
+type AntreaNetworkPolicy struct {
+	metav1.TypeMeta `json:",inline"`
+	// Standard metadata of the object.
+	metav1.ObjectMeta `json:"metadata,omitempty"`
+
+	// Specification of the desired behavior of AntreaNetworkPolicy.
+	Spec AntreaNetworkPolicySpec `json:"spec"`
+}
+
+// AntreaNetworkPolicySpec defines the desired state for AntreaNetworkPolicy.
+type AntreaNetworkPolicySpec struct {
+	// Priority specfies the order of the AntreaNetworkPolicy relative to
+	// other AntreaNetworkPolicies.
+	Priority int32 `json:"priority"`
+	// Select workloads on which the rules will be applied to.
+	AppliedTo []NetworkPolicyPeer `json:"appliedTo"`
+	// Set of ingress rules evaluated based on the order in which they are set.
+	// Currently Ingress rule supports setting the `From` field but not the `To`
+	// field within a Rule.
+	// +optional
+	Ingress []Rule `json:"ingress"`
+	// Set of egress rules evaluated based on the order in which they are set.
+	// Currently Egress rule supports setting the `To` field but not the `From`
+	// field within a Rule.
+	// +optional
+	Egress []Rule `json:"egress"`
+}
+
+// Rule describes the traffic allowed to/from the workloads selected by
+// Spec.AppliedTo. Based on the action specified in the rule, traffic is either
+// allowed or denied which exactly match the specified ports and protocol.
+type Rule struct {
+	// Action specifies the action to be applied on the rule. Defaults to
+	// ALLOW action.
+	// +optional
+	Action *RuleAction `json:"action"`
+	// Set of port and protocol allowed/denied by the rule. If this field is unset
+	// or empty, this rule matches all ports.
+	// +optional
+	Ports []NetworkPolicyPort `json:"ports"`
+	// Rule is matched if traffic originates from workloads selected by
+	// this field. If this field is empty, this rule matches all sources.
+	// +optional
+	From []NetworkPolicyPeer `json:"from"`
+	// Rule is matched if traffic is intended for workloads selected by
+	// this field. If this field is empty or missing, this rule matches all
+	// destinations.
+	// +optional
+	To []NetworkPolicyPeer `json:"to"`
+}
+
+// NetworkPolicyPeer describes the grouping selector of workloads.
+type NetworkPolicyPeer struct {
+	// IPBlock describes the IPAddresses/IPBlocks that is matched in to/from.
+	// IPBlock cannot be set as part of the AppliedTo field
+	// Cannot be set with any other selector.
+	// +optional
+	IPBlock *IPBlock `json:"ipBlock,omitempty"`
+	// Select Pods from AntreaNetworkPolicy's Namespace as workloads in
+	// AppliedTo/To/From fields. If set with NamespaceSelector, Pods are
+	// matched from Namespaces matched by the NamespaceSelector.
+	// Cannot be set with any other selector except NamespaceSelector.
+	// +optional
+	PodSelector *metav1.LabelSelector `json:"podSelector,omitempty"`
+	// Select all Pods from Namespaces matched by this selector, as
+	// workloads in To/From fields. If set with PodSelector,
+	// Pods are matched from Namespaces matched by the NamespaceSelector.
+	// Cannot be set with any other selector except PodSelector or
+	// ExternalEntitySelector.
+	// +optional
+	NamespaceSelector *metav1.LabelSelector `json:"namespaceSelector,omitempty"`
+	// Select ExternalEntities from AntreaNetworkPolicy's Namespace as workloads
+	// in AppliedTo/To/From fields. If set with NamespaceSelector,
+	// ExternalEntities are matched from Namespaces matched by the
+	// NamespaceSelector.
+	// Cannot be set with any other selector except NamespaceSelector.
+	ExternalEntitySelector *metav1.LabelSelector `json:"externalEntitySelector,omitempty"`
```

where is ExternalEntitySelector defined?

abhiraut

comment created time in 7 days

Pull request review comment vmware-tanzu/antrea

Add API types for Namespaced Antrea NetworkPolicy

```diff
+// Copyright 2020 Antrea Authors
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package v1beta1
+
+import (
+	v1 "k8s.io/api/core/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/util/intstr"
+)
+
+// +genclient
+// +genclient:noStatus
+// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
+
+type AntreaNetworkPolicy struct {
+	metav1.TypeMeta `json:",inline"`
+	// Standard metadata of the object.
+	metav1.ObjectMeta `json:"metadata,omitempty"`
+
+	// Specification of the desired behavior of AntreaNetworkPolicy.
+	Spec AntreaNetworkPolicySpec `json:"spec"`
+}
+
+// AntreaNetworkPolicySpec defines the desired state for AntreaNetworkPolicy.
+type AntreaNetworkPolicySpec struct {
+	// Priority specfies the order of the AntreaNetworkPolicy relative to
+	// other AntreaNetworkPolicies.
+	Priority int32 `json:"priority"`
+	// Select workloads on which the rules will be applied to.
+	AppliedTo []NetworkPolicyPeer `json:"appliedTo"`
+	// Set of ingress rules evaluated based on the order in which they are set.
+	// Currently Ingress rule supports setting the `From` field but not the `To`
+	// field within a Rule.
+	// +optional
+	Ingress []Rule `json:"ingress"`
+	// Set of egress rules evaluated based on the order in which they are set.
+	// Currently Egress rule supports setting the `To` field but not the `From`
+	// field within a Rule.
+	// +optional
+	Egress []Rule `json:"egress"`
+}
+
+// Rule describes the traffic allowed to/from the workloads selected by
+// Spec.AppliedTo. Based on the action specified in the rule, traffic is either
+// allowed or denied which exactly match the specified ports and protocol.
+type Rule struct {
+	// Action specifies the action to be applied on the rule. Defaults to
+	// ALLOW action.
+	// +optional
+	Action *RuleAction `json:"action"`
+	// Set of port and protocol allowed/denied by the rule. If this field is unset
+	// or empty, this rule matches all ports.
+	// +optional
+	Ports []NetworkPolicyPort `json:"ports"`
+	// Rule is matched if traffic originates from workloads selected by
+	// this field. If this field is empty, this rule matches all sources.
+	// +optional
+	From []NetworkPolicyPeer `json:"from"`
+	// Rule is matched if traffic is intended for workloads selected by
+	// this field. If this field is empty or missing, this rule matches all
+	// destinations.
+	// +optional
+	To []NetworkPolicyPeer `json:"to"`
+}
+
+// NetworkPolicyPeer describes the grouping selector of workloads.
+type NetworkPolicyPeer struct {
+	// IPBlock describes the IPAddresses/IPBlocks that is matched in to/from.
+	// IPBlock cannot be set as part of the AppliedTo field
+	// Cannot be set with any other selector.
+	// +optional
+	IPBlock *IPBlock `json:"ipBlock,omitempty"`
+	// Select Pods from AntreaNetworkPolicy's Namespace as workloads in
+	// AppliedTo/To/From fields. If set with NamespaceSelector, Pods are
+	// matched from Namespaces matched by the NamespaceSelector.
+	// Cannot be set with any other selector except NamespaceSelector.
+	// +optional
+	PodSelector *metav1.LabelSelector `json:"podSelector,omitempty"`
+	// Select all Pods from Namespaces matched by this selector, as
+	// workloads in To/From fields. If set with PodSelector,
+	// Pods are matched from Namespaces matched by the NamespaceSelector.
+	// Cannot be set with any other selector except PodSelector or
+	// ExternalEntitySelector.
+	// +optional
+	NamespaceSelector *metav1.LabelSelector `json:"namespaceSelector,omitempty"`
+	// Select ExternalEntities from AntreaNetworkPolicy's Namespace as workloads
+	// in AppliedTo/To/From fields. If set with NamespaceSelector,
+	// ExternalEntities are matched from Namespaces matched by the
+	// NamespaceSelector.
+	// Cannot be set with any other selector except NamespaceSelector.
+	ExternalEntitySelector *metav1.LabelSelector `json:"externalEntitySelector,omitempty"`
+}
+
+// IPBlock describes a particular CIDR (Ex. "192.168.1.1/24") that is allowed
+// or denied to/from the workloads matched by a Spec.AppliedTo.
+type IPBlock struct {
+	// CIDR is a string representing the IP Block
+	// Valid examples are "192.168.1.1/24".
+	CIDR string `json:"cidr"`
+}
+
+// NetworkPolicyPort describes the port and protocol to match in a rule.
+type NetworkPolicyPort struct {
+	// The protocol (TCP, UDP, or SCTP) which traffic must match.
+	// If not specified, this field defaults to TCP.
+	// +optional
+	Protocol *v1.Protocol `json:"protocol"`
+	// The port on the given protocol. This can either be a numerical
+	// or named port on a Pod. If this field is not provided, this
+	// matches all port names and numbers.
+	// TODO: extend it to include Port Range.
+	// +optional
+	Port *intstr.IntOrString `json:"port"`
+}
+
+// RuleAction describes the action to be applied on traffic matching a rule.
+type RuleAction string
+
+const (
+	// RuleActionAllow describes that rule matching traffic must be allowed.
```

is there a LOG action indicating that packets hitting the rule should be logged?
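
For illustration, the enum could accommodate logging roughly as follows; RuleActionDeny and RuleActionLog are assumptions prompted by the question above and do not appear in the posted diff, which ends at RuleActionAllow:

```go
package v1beta1

// RuleAction describes the action to be applied on traffic matching a rule.
type RuleAction string

const (
	// RuleActionAllow describes that rule matching traffic must be allowed.
	RuleActionAllow RuleAction = "Allow"
	// RuleActionDeny describes that rule matching traffic must be denied.
	RuleActionDeny RuleAction = "Deny"
	// RuleActionLog (hypothetical) would mark that packets hitting the
	// rule should be logged, in addition to being allowed or denied.
	RuleActionLog RuleAction = "Log"
)
```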

abhiraut

comment created time in 7 days

Pull request review comment vmware-tanzu/antrea

Add API types for Namespaced Antrea NetworkPolicy

```diff
+// Copyright 2020 Antrea Authors
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package v1beta1
+
+import (
+	v1 "k8s.io/api/core/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/util/intstr"
+)
+
+// +genclient
+// +genclient:noStatus
+// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
+
+type AntreaNetworkPolicy struct {
+	metav1.TypeMeta `json:",inline"`
+	// Standard metadata of the object.
+	metav1.ObjectMeta `json:"metadata,omitempty"`
+
+	// Specification of the desired behavior of AntreaNetworkPolicy.
+	Spec AntreaNetworkPolicySpec `json:"spec"`
+}
+
+// AntreaNetworkPolicySpec defines the desired state for AntreaNetworkPolicy.
+type AntreaNetworkPolicySpec struct {
```

ditto.

abhiraut

comment created time in 10 days

Pull request review comment vmware-tanzu/antrea

Add API types for Namespaced Antrea NetworkPolicy

```diff
+// Copyright 2020 Antrea Authors
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package v1beta1
+
+import (
+	v1 "k8s.io/api/core/v1"
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/util/intstr"
+)
+
+// +genclient
+// +genclient:noStatus
+// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object
+
+type AntreaNetworkPolicy struct {
```

do we need the Antrea prefix, given that it is already in the antrea.vmware.tanzu.com group?

abhiraut

comment created time in 10 days

Pull request review comment vmware-tanzu/antrea

Add API types for Namespaced Antrea NetworkPolicy

```diff
+// Copyright 2020 Antrea Authors
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package v1beta1
```

should the apigroup be v1alpha1.networking?
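
For context, a group/version pair for Kubernetes API types is declared roughly like this; both strings below are assumptions for illustration, not what the PR settled on:

```go
package v1alpha1

import (
	"k8s.io/apimachinery/pkg/runtime/schema"
)

// SchemeGroupVersion pins these types to an API group and version.
// The group and version here are hypothetical examples.
var SchemeGroupVersion = schema.GroupVersion{
	Group:   "networking.antrea.tanzu.vmware.com",
	Version: "v1alpha1",
}
```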

abhiraut

comment created time in 10 days

Pull request review comment vmware-tanzu/antrea

Added design doc for policy-only mode.

```diff
+# Running Antrea In Policy Only Mode
+
+Antrea supports chaining with routed CNI implementations such as EKS CNI. In this mode, Antrea
```
  1. Yes, the whole paragraph explains what a routed CNI topology is.
  2. Yes, Antrea's PolicyOnly mode must have this type of topology (e.g. AWS, or AKS Engine with a routed CNI configured).
suwang48404

comment created time in 24 days

Pull request review comment vmware-tanzu/antrea

Added design doc for policy-only mode.

```diff
+# Running Antrea In Policy Only Mode
+
+Antrea supports chaining with routed CNI implementations such as EKS CNI. In this mode, Antrea
```
  1. AKS Engine is not AKS.
  2. See this snippet of the document: does it not explain what a routed CNI is?

The diagram on the left illustrates a routed CNI network topology such as AWS EKS. In this topology a Pod connects to the host network via a point-to-point(PtP) like device, such as (but not limited to) a veth-pair. On the host network, a host route with corresponding Pod's IP address as destination is created on each PtP device. Within each Pod, routes are configured to ensure all outgoing traffic is sent over this PtP device, and incoming traffic is received on this PtP device. This is a spoke-and-hub model, where to/from Pod traffic, even within the same worker Node must traverse first to the host network and be routed by it.
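
A rough Go sketch of the host-route handling the paragraph above describes, using the vishvananda/netlink package; the function, device names, and flags are illustrative assumptions, not Antrea's actual implementation:

```go
package podnetwork

import (
	"net"

	"github.com/vishvananda/netlink"
)

// movePodRouteToGateway replaces the /32 host route that the primary CNI
// installed on the Pod's PtP device with an equivalent route via the local
// gateway (e.g. gw0), steering Pod traffic through the OVS bridge instead.
func movePodRouteToGateway(podIP net.IP, gwName string) error {
	gw, err := netlink.LinkByName(gwName)
	if err != nil {
		return err
	}
	route := &netlink.Route{
		LinkIndex: gw.Attrs().Index,
		Dst:       &net.IPNet{IP: podIP, Mask: net.CIDRMask(32, 32)},
		Scope:     netlink.SCOPE_LINK,
	}
	// RouteReplace overwrites any existing route to podIP, effectively
	// moving the host route off the PtP device and onto the gateway.
	return netlink.RouteReplace(route)
}
```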

suwang48404

comment created time in 24 days

Pull request review comment vmware-tanzu/antrea

Added design doc for policy-only mode.

```diff
+# Running Antrea In Policy Only Mode
+
+Antrea supports chaining with routed CNI implementations such as EKS CNI. In this mode, Antrea
```

We had a switched CNI before (Azure), but today Antrea supports only AWS.

thx

suwang48404

comment created time in 24 days

pull request comment vmware-tanzu/antrea

Added design doc for policy-only mode.

/skip-all

suwang48404

comment created time in a month

pull request comment vmware-tanzu/antrea

Added design doc for policy-only mode.

@antoninbas @jianjuns can you please help approve and merge?

I cannot merge it myself.

suwang48404

comment created time in a month

Pull request review comment vmware-tanzu/antrea

Added design doc for policy-only mode.

```diff
+# Running Antrea In Policy Only Mode
+
+Antrea supports chaining with routed CNI implementations such as EKS CNI. In this mode, Antrea
+enforces Kubernetes NetworkPolicy, and delegates Pod IP management and network connectivity to the
+primary CNI.
+
+## Design
+
+Antrea is designed to work as NetworkPolicy plug-in for the routed CNIs. For as long as a CNI
+implementation fits into this model, Antrea may be inserted to enforce NetworkPolicy in that
+CNI's environment.
+
+<img src="/docs/assets/policy-only-cni.svg" width="600" alt="Antrea Switched CNI">
+
+The above diagram depicts a routed CNI network topology on the left, and what it looks like
+after Antrea inserts the OVS bridge into the data path.
+
+The diagram on the left illustrates a routed CNI network topology such as AWS EKS.
+In this topology a Pod connects to the host network via a
+point-to-point(PtP) like device, such as (but not limited to) a veth-pair. On the host network, a
+host route with corresponding Pod's IP address as destination is created on each PtP device. Within
+each Pod, routes are configured to ensure all outgoing traffic is sent over this PtP device, and
+incoming traffic is received on this PtP device. This is a spoke-and-hub model, where to/from Pod
+traffic, even within the same worker Node must traverse first to the host network and be
+routed by it.
+
+When a Pod is instantiated, the container runtime first calls the primary CNI to configure Pod's
+IP, route table, DNS etc, and then connects Pod to host network with a PtP device such as a
+veth-pair. When Antrea is chained with this primary CNI, container runtime then calls
+Antrea, and the Antrea agent attaches Pod's PtP device to the OVS bridge, and moves the host
+route to the Pod to local host gateway(``gw0``) interface from the PtP device. This is
+illustrated by the diagram on the right.
+
+Antrea needs to satisfy that
+1. All IP packets, sent on ``gw0`` in the host network, are received by the Pods exactly the same
+as if the OVS bridge had not been inserted.
+1. Similarly all IP packets, sent by Pods, are received by other Pods or the host network exactly
+the same as if OVS bridge had not been inserted.
+1. There are no requirements on Pod MAC addresses as all MAC addresses stays within the OVS bridge.
+
+To satisfy the above requirements, Antrea needs no knowledge of Pod's network configurations nor
+of underlying CNI network, it simply needs to program the following OVS flows on the OVS bridge:
+1. A default ARP responder flow that answers any ARP request. Its sole purpose is so that a Pod's
+neighbor may be resolved, and packets may be sent by that Pod to that neighbor.
+1. IP packets are routed based on their destination IP if it matches any local Pod's IP.
+1. All other IP packets are routed to host network via ``gw0`` interface.
+
+These flows together handle all Pod traffic patterns with exception of Pod-to-Service traffic
+that we will address next.
+
+## Handling Pod-To-Service
+The discussion in this section is relevant also to Pod-to-Service traffic in NoEncap traffic
+mode. Antrea applies the same principle to handle Pod-to-Service traffic in all traffic modes where
+traffic requires no encapsulation.
+
+Antrea uses kube-proxy for load balancing. At the same time, it also supports Pod level
+NetworkPolicy enforcement.
+
+This means that a Pod-to-Service traffic flow needs to
+1. first traverse to the host network for load balancing (DNAT), then
+1. come back to OVS bridge for Pod Egress NetworkPolicy processing, and
+1. go back to the host network yet again to be forwarded, if DNATed destination in 1) is an
+inter-Node Pod or external network entity.
+
+We refer to the last traffic pattern as re-entrance traffic because in this pattern, a traffic flow
+enters host network twice -- first time for load balancing, and second time for forwarding.
+
+Denote
+- VIP as cluster IP of a service
+- SP_IP/DP_IP as respective client and server Pod IP
+- VPort as service port of a service
+- TPort as target port of server Pod
+- SPort as original source port
+
+The service request's 5-tuples upon first and second entrance to the host network, and
+its reply's 5-tuples would be like
+
+```
+request/service:
+-- Entering Host Network(via gw0):     SP_IP/SPort->VIP/VPort
+-- After LB(DNAT):                     SP_IP/SPort->DP_IP/TPort
+-- After Route(to gw0):                SP_IP/SPort->DP_IP/TPort
+
+request/forwarding:
+-- Entering Host Network(via gw0):     SP_IP/SPort->DP_IP/TPort
+-- After route(to uplink):             SP_IP/SPort->DP_IP/TPort
+
+reply:
+-- Entering Host Network(via uplink):  DP_IP/TPort -> SP_IP/SPort
+-- After LB(DNAT):                     VIP/VPort->SP_IP/Sport
+-- After route(to gw0):                VIP/VPort->SP_IP/Sport
+```
+
+#### Routing
+Note that the request with destination IP DP_IP needs to be routed differently in LB and
+forwarding cases.(This differs from encap traffic where all traffic flows including post LB
+service traffic share the same ``main`` route table.) Antrea creates a customized
+``antrea_service`` route table, it is used in conjunction with ip-rule and ip-tables to handle
+service traffic. Together they work as follows
+1. At Antrea initialization, an ip-tables rule is created in ``mangle table`` that marks IP packets
+with service IP as destination IP and are from ``gw0``.
+1. At Antrea initialization, an ip-rule is added to select ``antrea_service`` route table as routing
+table if traffic is marked in 1).
+1. At Antrea initialization, a default route entry is added to ``antrea_service`` route table to
+forward all traffic to ``gw0``.
+
+The outcome may be something like this
+```bash
+ip neigh | grep gw0
+169.254.253.1 dev gw0 lladdr 12:34:56:78:9a:bc PERMANENT
+
+ip route show table 300 #tbl_idx=300 is antrea_service
+default via 169.254.253.1 dev gw0 onlink
+
+ip rule | grep gw0
+300:	from all fwmark 0x800/0x800 iif gw0 lookup 300
+
+iptables -t mangle  -L ANTREA-MANGLE
+Chain ANTREA-MANGLE (1 references)
+target     prot opt source               destination
+MARK       all  --  anywhere             10.0.0.0/16          /* Antrea: mark service traffic */ MARK or 0x800
+MARK       all  --  anywhere            !10.0.0.0/16          /* Antrea: unmark post LB service traffic */ MARK and 0x0
+```
+
+The above configuration allows Pod-to-Service traffic to use ``antrea_service`` route table after
+load balancing, and to be steered back to OVS bridge for Pod NetworkPolicy processing.
+
+#### Conntrack
+Note also that with re-entrance traffic, a service request, after being load balanced and routed
+back to OVS bridge via ``gw0``, has exactly the same 5-tuple as when it re-enters the host network
+for forwarding.
+
+When a service request with same 5-tuples re-enters the host network, it confuses Linux conntrack.
+The Linux considers the re-entrance IP packet from a new connection flow that uses same source port
+that has been allocated in the DNAT connection. In turn, the re-entrance packet triggers
+another SNAT connection. The overall effect is that the service's DNAT connection is not
+discovered by the service reply, and no Un-DNAT takes place. As a result, the reply is not
+recognized, and therefore dropped by the source Pod.
+
+Antrea uses the following mechanisms to handle Pod-to-Service traffic re-entrance to the host
+network, and bypasses conntrack in host network.
+1. In OVS bridge, adds flow that marks any re-entrance traffic with a special source MAC.
+1. In OVS bridge, adds flow that causes any re-entrance traffic to bypasses conntrack in OVS zone.
+1. In the host network' ip-tables, adds a rule in ``raw`` table that if matching the special
+source MAC in 1), bypass conntrack in host zone.
+
+#### NetworkPolicy Considerations
+Note that when a traffic flow is re-entrance, the original reply packets do not make it into OVS,
+it is un-DNATted in the host network before reaching OVS. This, however, does not have any impact on
```
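
As a minimal sketch of step 1 in the routing list above, using the go-iptables library: the chain name, mark value, and Service CIDR mirror the sample output in the quoted doc, while the function itself is an illustrative assumption rather than Antrea's actual code.

```go
package servicetraffic

import (
	"github.com/coreos/go-iptables/iptables"
)

// markServiceTraffic appends a mangle-table rule that marks packets arriving
// from gw0 whose destination is the Service CIDR, so that the matching
// ip rule can divert them to the antrea_service route table.
func markServiceTraffic(serviceCIDR, gwName string) error {
	ipt, err := iptables.New()
	if err != nil {
		return err
	}
	// Assumes the ANTREA-MANGLE chain already exists in the mangle table.
	return ipt.AppendUnique("mangle", "ANTREA-MANGLE",
		"-i", gwName, "-d", serviceCIDR,
		"-m", "comment", "--comment", "Antrea: mark service traffic",
		"-j", "MARK", "--set-xmark", "0x800/0x800")
}
```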

done.

suwang48404

comment created time in a month

Pull request review comment vmware-tanzu/antrea

Added design doc for policy-only mode.

```diff
+# Running Antrea In Policy Only Mode
+
+Antrea supports chaining with routed CNI implementations such as EKS CNI. In this mode, Antrea
+enforces Kubernetes NetworkPolicy, and delegates Pod IP management and network connectivity to the
+primary CNI.
+
+## Design
+
+Antrea is designed to work as NetworkPolicy plug-in for the routed CNIs. For as long as a CNI
+implementation fits into this model, Antrea may be inserted to enforce NetworkPolicy in that
+CNI's environment.
+
+<img src="/docs/assets/policy-only-cni.svg" width="600" alt="Antrea Switched CNI">
+
+The above diagram depicts a routed CNI network topology on the left, and what it looks like
+after Antrea inserts the OVS bridge into the data path.
+
+The diagram on the left illustrates a routed CNI network topology such as AWS EKS.
+In this topology a Pod connects to the host network via a
+point-to-point(PtP) like device, such as (but not limited to) a veth-pair. On the host network, a
+host route with corresponding Pod's IP address as destination is created on each PtP device. Within
+each Pod, routes are configured to ensure all outgoing traffic is sent over this PtP device, and
+incoming traffic is received on this PtP device. This is a spoke-and-hub model, where to/from Pod
+traffic, even within the same worker Node must traverse first to the host network and be
+routed by it.
+
+When a Pod is instantiated, the container runtime first calls the primary CNI to configure Pod's
+IP, route table, DNS etc, and then connects Pod to host network with a PtP device such as a
+veth-pair. When Antrea is chained with this primary CNI, container runtime then calls
+Antrea, and the Antrea agent attaches Pod's PtP device to the OVS bridge, and moves the host
```

done.

suwang48404

comment created time in a month

Pull request review comment vmware-tanzu/antrea

Added design doc for policy-only mode.

```diff
+# Running Antrea In Policy Only Mode
+
+Antrea supports chaining with routed CNI implementations such as EKS CNI. In this mode, Antrea
+enforces Kubernetes NetworkPolicy, and delegates Pod IP management and network connectivity to the
+primary CNI.
+
+## Design
+
+Antrea is designed to work as NetworkPolicy plug-in for the routed CNIs. For as long as a CNI
```

done.

suwang48404

comment created time in a month

push event suwang48404/antrea

Jianjun Shen

commit sha 641cdc8a425f814ea62e38e65183723e97a1f032

Validate all controller and agent commands in antctl test (#558) Just to check all commands can be executed but do not check the results.

view details

Weiqiang Tang

commit sha 7f77e1670d62c1865706ae79814787858c92f099

Implement openflow learn action control interfaces (#594) - Implement openflow learn action control interfaces - Add related tests in integration test Signed-off-by: Weiqiang TANG <weiqiangt@vmware.com> Co-authored-by: Antonin Bas <antonin.bas@gmail.com> Co-authored-by: Quan Tian <qtian@vmware.com>

view details

Zhecheng Li

commit sha 808f6bf122790649114a1b2b7b07b7be25ad69bf

Decouple pkg querier from pkg monitor (#522) Created pkg agent/querier and controller/querier and moved methods in pkg monitor to them.

view details

Rui Cao

commit sha ab7eb6395938c70c890f83c44a45793c2cda3cc3

CI: Add timeout for kubectl rollout (#584) kubectl rollout may get stuck in failure if can not rollout successfully. This patch add 5 minutes timeout for the operation to make sure CI won't be blocked by it. Signed-off-by: RuiCao <rcao@vmware.com>

view details

Salvatore Orlando

commit sha 64cecb485e82e17de3eae749e07e87d12274ee02

Update community meeting schedule

view details

Antonin Bas

commit sha 78108bbbbca07aeb4a48545dcd2bcc83f0c24e74

Fix prepare-assets.sh script for antrea-octant.yml The wrong Docker image was used because the IMG_NAME environment variable was not set properly before generating the manifest. Fixes #607

view details

Wenying Dong

commit sha e070043ea69a56739ff7a4630afdabc22db313c6

Enhance bundle to support both group and flow modifications (#559)

view details

Jianjun Shen

commit sha ad0801e04c5b463a0c95ae1207019d361c72eb2d

Fix antctl command names in troubleshooting.md (#611) "antctl get network-policy" should be "antctl get networkpolicy"

view details

Zhang Yiwei

commit sha eeba034041095a1faa23b68dbe87c0559c0dea08

add namespace,name fields & filters for antctl get netpol (#576)

view details

Quan Tian

commit sha 2fc895867f5975a08745020a2919646a7882b390

Support configurable apiserver ports (#600) * Use unpopular ports as antrea components' default ports * Support overriding the ports via config files * Support overriding the server address via "-s" for antctl

view details

Jianjun Shen

commit sha 238a31429cc04ded150e97486e82f226f041c7c1

Add agent API and antctl command to dump OVS flows (#602) Add agent API with path /ovsflows to return OVS flows. Add antctl command to dump OVS flows retrieved from /ovsflows. For now we support only dumping all flows and dumping flows of a specified Pod. Later we will support dumping flows of a NetworkPolicy, a remote Node, a Service, etc. In the current implementation, agent dumps OVS flows by executing "ovs-ofctl dump-flows". In future we should extend openflow.Flow to support converting a flow to a readable string directly, and we should check or track whether a flow is realized in OVS using an more efficient way. Then we no more need to use ovs-ofctl to dump flows.

view details

Quan Tian

commit sha b126b528d9fadd66713ae503992c86dbc9938ec9

Fix wrong error message when positional arguments are provided (#618) The error message was opposite to the fact.

view details

Weiqiang Tang

commit sha e1d56aaa51ca4400e3bc4d22e328ca3c4765cdb2

Add missing yaml output annotations for antctl (#621) - Add yaml annotations for agent-info and controller-info Signed-off-by: Weiqiang TANG <weiqiangt@vmware.com>

view details

Zhecheng Li

commit sha e6a218d71a63e3f640b7ac186f13a65afaa041de

Prune unused images (#601) Remove old images in CI. * old job images will be deleted by new job.

view details

Antonin Bas

commit sha 88cd9ac19c23d533684a2216f238a8a2df68a500

Free disk space in some workflows with "apt-get clean" A recent update to the Github VM images has caused our Kind tests to fail because of a pretty significant reduction in the amount of disk space available. Many users have reported the issue to Github: https://github.com/actions/virtual-environments/issues/709 The recommended workaround is to run "apt-get clean" which should free up a significant amount of space. In our case, most of the disk usage comes from docker images and containers, and we don't have an easy way to use the /dev/sdb1 partition (guaranteed 14GB). According to the Github team, they have started a rollback to the old images, but it makes sense for us to merge this workaround anyway, to avoid disruptions in the immediate future and to potentially have more disk space available, should we need it later. Fixes #635

view details

Kobi Samoray

commit sha 2ac48ff201fa955c404962ac81a1987aeb0ebc35

Apply authentication to agent apiserver endpoint (#622) Antrea agent uses apiserver to interact with antctl. However, unlike the controller, the API endpoint is exposed only locally which limits antctl to local execution. Additionally, we would like to reuse the listener for Prometheus metrics, which would require external access to the API endpoint's metrics path. Having authentication in a similar way to the controller's implementation could help here as it will allow external exposure of the API endpoint.

view details

Zhecheng Li

commit sha 9fe21b03104000b12ddef83ad770cace549104f0

Timeout for sonobuoy jobs (#643) Set timeout of sonobuoy jobs (conformance, networkpolicy) to 1800s.

view details

Zhang Yiwei

commit sha 48fe16647fd703b8ec88c0eba6b7b2a07c4e2817

add clear error message for antctl (#624)

view details

Yuki Tsuboi

commit sha 8625b6a0acd1a48eb4c597e70d9719cf70b01fbd

Compliance to k8s log format in antrea-agent pod (#629) logrus is used in some packages of Antrea-agent then it used different format of logging. This introduces log formatting for logrus in compliance with k8s log. Signed-off-by: Yuki Tsuboi <ytsuboi@vmware.com>

view details

Quan Tian

commit sha d8bbb7ff887fc4a33659fa3132e3485d3832fc2f

Acquire xtables.lock before executing iptables-restore (#633) We need to acquire xtables lock explicitly for iptables-restore to prevent it from conflicting with iptables/iptables-restore which might being called by kube-proxy. iptables supports "--wait" option and go-iptables has enabled it. iptables-restore doesn't support the option until 1.6.2, but it's not widely deployed yet. Besides, this PR logs the error instead of the command when executing iptables-restore fails.

view details

push time in a month

pull request comment vmware-tanzu/antrea

Added design doc for policy-only mode.

@jianjuns @suwang48404 I didn't realize this was still open. Anything preventing it from getting merged?

My apologies, let me try to address @jianjuns' comments quickly.

suwang48404

comment created time in a month

pull request comment vmware-tanzu/antrea

Added design doc for policy-only mode.

/skip-all

suwang48404

comment created time in 2 months

Pull request review comment vmware-tanzu/antrea

Added design doc for policy-only mode.

```diff
+# Running Antrea In Policy Only Mode
+
+Antrea supports chaining with routed CNI implementations such as EKS CNI. In this mode, Antrea
+enforces Kubernetes NetworkPolicy, and delegates Pod IP management and network connectivity to the
+primary CNI.
+
+## Design
+
+Antrea is designed to work as NetworkPolicy plug-in for the routed CNIs. For as long as a CNI
+implementation fits into this model, Antrea may be inserted to enforce NetworkPolicy in that
+CNI's environment.
+
+<img src="/docs/assets/policy-only-cni.svg" width="600" alt="Antrea Switched CNI">
+
+The above diagram depicts a routed CNI network topology on the left, and what it looks like
+after Antrea inserts the OVS bridge into the data path.
+
+The diagram on the left illustrates a routed CNI network topology such as AWS EKS.
+In this topology a Pod connects to the host network via a
+point-to-point(PtP) like device, such as (but not limited to) a veth-pair. On the host network, a
+host route with corresponding Pod's IP address as destination is created on each PtP device. Within
+each Pod, routes are configured to ensure all outgoing traffic is sent over this PtP device, and
+incoming traffic is received on this PtP device. This is a spoke-and-hub model, where to/from Pod
+traffic, even within the same worker Node must traverse first to the host network and being
+routed by it.
+
+When a Pod is instantiated, the container runtime first calls the primary CNI to configure Pod's
+IP, route table, DNS etc, and then connects Pod to host network with a PtP device such as a
+veth-pair. When Antrea is chained with this primary CNI, container runtime then calls
+Antrea, and the Antrea agent attaches Pod's PtP device to the OVS bridge, and moves the host
+route to the Pod to local host gateway(``gw0``) interface from the PtP device. This is
+illustrated by the diagram on the right.
+
+Antrea needs to satisfy that
+1. All IP packets, sent on ``gw0`` in the host network, are received by the Pods exactly the same
+as if the OVS bridge had not been inserted.
+1. Similarly all IP packets, sent by Pods, are received by other Pods or the host network exactly
+the same as if OVS bridge had not been inserted.
+1. There are no requirements on Pod MAC addresses as all MAC addresses stays within the OVS bridge.
+
+To satisfy the above requirements, Antrea needs no knowledge of Pod's network configurations nor
+of underlying CNI network, it simply needs to program the following OVS flows on the OVS bridge:
+1. A default ARP responder flow that answers any ARP request. Its sole purpose is so that a Pod's
+neighbor may be resolved, and packets may be sent by that Pod to that neighbor.
+1. IP packets are routed based on its destination IP if it matches any local Pod's IP.
+1. All other IP packets are routed to host network via gw0 interface.
+
+These flows together handle all Pod traffic patterns with exception of Pod-to-Service traffic
+that we will address next.
+
+## Handling Pod-To-Service
+The discussion in this section is relevant also to Pod-to-Service traffic in NoEncap traffic
+mode. Antrea applies same principle to handle Pod-to-Service traffic in all traffic modes where
```

done.

suwang48404

comment created time in 2 months

Pull request review comment vmware-tanzu/antrea

Added design doc for policy-only mode.

```diff
+# Running Antrea In Policy Only Mode
+
+Antrea supports chaining with routed CNI implementations such as EKS CNI. In this mode, Antrea
+enforces Kubernetes NetworkPolicy, and delegates Pod IP management and network connectivity to the
+primary CNI.
+
+## Design
+
+Antrea is designed to work as NetworkPolicy plug-in for the routed CNIs. For as long as a CNI
+implementation fits into this model, Antrea may be inserted to enforce NetworkPolicy in that
+CNI's environment.
+
+<img src="/docs/assets/policy-only-cni.svg" width="600" alt="Antrea Switched CNI">
+
+The above diagram depicts a routed CNI network topology on the left, and what it looks like
+after Antrea inserts the OVS bridge into the data path.
+
+The diagram on the left illustrates a routed CNI network topology such as AWS EKS.
+In this topology a Pod connects to the host network via a
+point-to-point(PtP) like device, such as (but not limited to) a veth-pair. On the host network, a
+host route with corresponding Pod's IP address as destination is created on each PtP device. Within
+each Pod, routes are configured to ensure all outgoing traffic is sent over this PtP device, and
+incoming traffic is received on this PtP device. This is a spoke-and-hub model, where to/from Pod
+traffic, even within the same worker Node must traverse first to the host network and being
+routed by it.
+
+When a Pod is instantiated, the container runtime first calls the primary CNI to configure Pod's
+IP, route table, DNS etc, and then connects Pod to host network with a PtP device such as a
+veth-pair. When Antrea is chained with this primary CNI, container runtime then calls
+Antrea, and the Antrea agent attaches Pod's PtP device to the OVS bridge, and moves the host
+route to the Pod to local host gateway(``gw0``) interface from the PtP device. This is
+illustrated by the diagram on the right.
+
+Antrea needs to satisfy that
+1. All IP packets, sent on ``gw0`` in the host network, are received by the Pods exactly the same
+as if the OVS bridge had not been inserted.
+1. Similarly all IP packets, sent by Pods, are received by other Pods or the host network exactly
+the same as if OVS bridge had not been inserted.
+1. There are no requirements on Pod MAC addresses as all MAC addresses stays within the OVS bridge.
+
+To satisfy the above requirements, Antrea needs no knowledge of Pod's network configurations nor
+of underlying CNI network, it simply needs to program the following OVS flows on the OVS bridge:
+1. A default ARP responder flow that answers any ARP request. Its sole purpose is so that a Pod's
+neighbor may be resolved, and packets may be sent by that Pod to that neighbor.
+1. IP packets are routed based on its destination IP if it matches any local Pod's IP.
+1. All other IP packets are routed to host network via gw0 interface.
+
+These flows together handle all Pod traffic patterns with exception of Pod-to-Service traffic
+that we will address next.
+
+## Handling Pod-To-Service
+The discussion in this section is relevant also to Pod-to-Service traffic in NoEncap traffic
+mode. Antrea applies same principle to handle Pod-to-Service traffic in all traffic modes where
+traffic requires no encapsulation.
+
+Antrea uses kube-proxy for load balancing. At the same time, it also supports Pod level
+NetworkPolicy enforcement.
+
+This means that a Pod-to-Service traffic flow needs to
+1. first traverse to the host network for load balancing (DNAT), then
+1. come back to OVS bridge for Pod Egress NetworkPolicy processing, and
+1. go back to the host network yet again to be forwarded, if DNATed destination in 1) is an
+inter-Node Pod or external network entity.
+
+We refer to the last traffic pattern as re-entrance traffic because in this pattern, a traffic flow
+enters host network twice -- first time for load balancing, and second time for forwarding.
+
+Specifically a re-entrance traffic may be traced as follows:<br>
+Denote VIP as cluster IP of a service, SP_IP/DP_IP as respective client and server Pod IP;
+VPort as service port of a service, TPort as target port of server Pod, and SPort as original
+source port. The service request's 5-tuples upon first and second entrance to the host network, and
+its reply's 5-tuples would be like
+
+```
+request/service:
+-- Entering Host Network(via gw0):     SP_IP/SPort->VIP/VPort
+-- After LB(DNAT):                     SP_IP/SPort->DP_IP/TPort
+-- After Route(to gw0):                SP_IP/SPort->DP_IP/TPort
+
+request/forwrding:
```

done.

suwang48404

comment created time in 2 months

Pull request review comment vmware-tanzu/antrea

Added design doc for policy-only mode.

```diff
+# Running Antrea In Policy Only Mode
+
+Antrea supports chaining with routed CNI implementations such as EKS CNI. In this mode, Antrea
+enforces Kubernetes NetworkPolicy, and delegates Pod IP management and network connectivity to the
+primary CNI.
+
+## Design
+
+Antrea is designed to work as NetworkPolicy plug-in for the routed CNIs. For as long as a CNI
+implementation fits into this model, Antrea may be inserted to enforce NetworkPolicy in that
+CNI's environment.
+
+<img src="/docs/assets/policy-only-cni.svg" width="600" alt="Antrea Switched CNI">
+
+The above diagram depicts a routed CNI network topology on the left, and what it looks like
+after Antrea inserts the OVS bridge into the data path.
+
+The diagram on the left illustrates a routed CNI network topology such as AWS EKS.
+In this topology a Pod connects to the host network via a
+point-to-point(PtP) like device, such as (but not limited to) a veth-pair. On the host network, a
+host route with corresponding Pod's IP address as destination is created on each PtP device. Within
+each Pod, routes are configured to ensure all outgoing traffic is sent over this PtP device, and
+incoming traffic is received on this PtP device. This is a spoke-and-hub model, where to/from Pod
+traffic, even within the same worker Node must traverse first to the host network and being
+routed by it.
+
+When a Pod is instantiated, the container runtime first calls the primary CNI to configure Pod's
+IP, route table, DNS etc, and then connects Pod to host network with a PtP device such as a
+veth-pair. When Antrea is chained with this primary CNI, container runtime then calls
+Antrea, and the Antrea agent attaches Pod's PtP device to the OVS bridge, and moves the host
+route to the Pod to local host gateway(``gw0``) interface from the PtP device. This is
+illustrated by the diagram on the right.
+
+Antrea needs to satisfy that
+1. All IP packets, sent on ``gw0`` in the host network, are received by the Pods exactly the same
+as if the OVS bridge had not been inserted.
+1. Similarly all IP packets, sent by Pods, are received by other Pods or the host network exactly
+the same as if OVS bridge had not been inserted.
+1. There are no requirements on Pod MAC addresses as all MAC addresses stays within the OVS bridge.
+
+To satisfy the above requirements, Antrea needs no knowledge of Pod's network configurations nor
+of underlying CNI network, it simply needs to program the following OVS flows on the OVS bridge:
+1. A default ARP responder flow that answers any ARP request. Its sole purpose is so that a Pod's
+neighbor may be resolved, and packets may be sent by that Pod to that neighbor.
+1. IP packets are routed based on its destination IP if it matches any local Pod's IP.
+1. All other IP packets are routed to host network via gw0 interface.
+
+These flows together handle all Pod traffic patterns with exception of Pod-to-Service traffic
+that we will address next.
+
+## Handling Pod-To-Service
+The discussion in this section is relevant also to Pod-to-Service traffic in NoEncap traffic
+mode. Antrea applies same principle to handle Pod-to-Service traffic in all traffic modes where
+traffic requires no encapsulation.
+
+Antrea uses kube-proxy for load balancing. At the same time, it also supports Pod level
+NetworkPolicy enforcement.
+
+This means that a Pod-to-Service traffic flow needs to
+1. first traverse to the host network for load balancing (DNAT), then
+1. come back to OVS bridge for Pod Egress NetworkPolicy processing, and
+1. go back to the host network yet again to be forwarded, if DNATed destination in 1) is an
+inter-Node Pod or external network entity.
+
+We refer to the last traffic pattern as re-entrance traffic because in this pattern, a traffic flow
+enters host network twice -- first time for load balancing, and second time for forwarding.
+
+Specifically a re-entrance traffic may be traced as follows:<br>
+Denote VIP as cluster IP of a service, SP_IP/DP_IP as respective client and server Pod IP;
+VPort as service port of a service, TPort as target port of server Pod, and SPort as original
+source port. The service request's 5-tuples upon first and second entrance to the host network, and
+its reply's 5-tuples would be like
+
+```
+request/service:
+-- Entering Host Network(via gw0):     SP_IP/SPort->VIP/VPort
+-- After LB(DNAT):                     SP_IP/SPort->DP_IP/TPort
+-- After Route(to gw0):                SP_IP/SPort->DP_IP/TPort
+
+request/forwrding:
+-- Entering Host Network(via gw0):     SP_IP/SPort->DP_IP/TPort
+-- After route(to uplink):             SP_IP/SPort->DP_IP/TPort
+
+reply:
+-- Entering Host Network(via uplink):  DP_IP/TPort -> SP_IP/SPort
+-- After LB(DNAT):                     VIP/VPort->SP_IP/Sport
+-- After route(to gw0):                VIP/VPort->SP_IP/Sport
+```
+
+#### Routing
+Note that the request with destination IP DP_IP needs to be routed differently in LB and
+forwarding cases.(This differs from encap traffic where all traffic flows including post LB
+service traffic share the same ``main`` route table.) Antrea creates a customized
+``antrea_service`` route table, it is used in
+conjunction with ip-rule and ip-tables to handle service traffic. Together they work as follows
+1. At Antrea initialization, an ip-tables rule is created in ``mangle table`` that marks IP packets
+with service IP as destination IP and are from gw0.
```

done.

suwang48404

comment created time in 2 months

Pull request review comment vmware-tanzu/antrea

Added design doc for policy-only mode.

```diff
+# Running Antrea In Policy Only Mode
+
+Antrea supports chaining with routed CNI implementations such as EKS CNI. In this mode, Antrea
+enforces Kubernetes NetworkPolicy, and delegates Pod IP management and network connectivity to the
+primary CNI.
+
+## Design
+
+Antrea is designed to work as NetworkPolicy plug-in for the routed CNIs. For as long as a CNI
+implementation fits into this model, Antrea may be inserted to enforce NetworkPolicy in that
+CNI's environment.
+
+<img src="/docs/assets/policy-only-cni.svg" width="600" alt="Antrea Switched CNI">
+
+The above diagram depicts a routed CNI network topology on the left, and what it looks like
+after Antrea inserts the OVS bridge into the data path.
+
+The diagram on the left illustrates a routed CNI network topology such as AWS EKS.
+In this topology a Pod connects to the host network via a
+point-to-point(PtP) like device, such as (but not limited to) a veth-pair. On the host network, a
+host route with corresponding Pod's IP address as destination is created on each PtP device. Within
+each Pod, routes are configured to ensure all outgoing traffic is sent over this PtP device, and
+incoming traffic is received on this PtP device. This is a spoke-and-hub model, where to/from Pod
+traffic, even within the same worker Node must traverse first to the host network and being
+routed by it.
+
+When a Pod is instantiated, the container runtime first calls the primary CNI to configure Pod's
+IP, route table, DNS etc, and then connects Pod to host network with a PtP device such as a
+veth-pair. When Antrea is chained with this primary CNI, container runtime then calls
+Antrea, and the Antrea agent attaches Pod's PtP device to the OVS bridge, and moves the host
+route to the Pod to local host gateway(``gw0``) interface from the PtP device. This is
+illustrated by the diagram on the right.
+
+Antrea needs to satisfy that
+1. All IP packets, sent on ``gw0`` in the host network, are received by the Pods exactly the same
+as if the OVS bridge had not been inserted.
+1. Similarly all IP packets, sent by Pods, are received by other Pods or the host network exactly
+the same as if OVS bridge had not been inserted.
+1. There are no requirements on Pod MAC addresses as all MAC addresses stays within the OVS bridge.
+
+To satisfy the above requirements, Antrea needs no knowledge of Pod's network configurations nor
+of underlying CNI network, it simply needs to program the following OVS flows on the OVS bridge:
+1. A default ARP responder flow that answers any ARP request. Its sole purpose is so that a Pod's
+neighbor may be resolved, and packets may be sent by that Pod to that neighbor.
+1. IP packets are routed based on its destination IP if it matches any local Pod's IP.
+1. All other IP packets are routed to host network via gw0 interface.
+
+These flows together handle all Pod traffic patterns with exception of Pod-to-Service traffic
+that we will address next.
+
+## Handling Pod-To-Service
+The discussion in this section is relevant also to Pod-to-Service traffic in NoEncap traffic
+mode. Antrea applies same principle to handle Pod-to-Service traffic in all traffic modes where
+traffic requires no encapsulation.
+
+Antrea uses kube-proxy for load balancing. At the same time, it also supports Pod level
+NetworkPolicy enforcement.
+
+This means that a Pod-to-Service traffic flow needs to
+1. first traverse to the host network for load balancing (DNAT), then
+1. come back to OVS bridge for Pod Egress NetworkPolicy processing, and
+1. go back to the host network yet again to be forwarded, if DNATed destination in 1) is an
+inter-Node Pod or external network entity.
+
+We refer to the last traffic pattern as re-entrance traffic because in this pattern, a traffic flow
+enters host network twice -- first time for load balancing, and second time for forwarding.
+
+Specifically a re-entrance traffic may be traced as follows:<br>
+Denote VIP as cluster IP of a service, SP_IP/DP_IP as respective client and server Pod IP;
+VPort as service port of a service, TPort as target port of server Pod, and SPort as original
+source port. The service request's 5-tuples upon first and second entrance to the host network, and
+its reply's 5-tuples would be like
+
+```
+request/service:
+-- Entering Host Network(via gw0):     SP_IP/SPort->VIP/VPort
+-- After LB(DNAT):                     SP_IP/SPort->DP_IP/TPort
+-- After Route(to gw0):                SP_IP/SPort->DP_IP/TPort
+
+request/forwrding:
+-- Entering Host Network(via gw0):     SP_IP/SPort->DP_IP/TPort
+-- After route(to uplink):             SP_IP/SPort->DP_IP/TPort
+
+reply:
+-- Entering Host Network(via uplink):  DP_IP/TPort -> SP_IP/SPort
+-- After LB(DNAT):                     VIP/VPort->SP_IP/Sport
+-- After route(to gw0):                VIP/VPort->SP_IP/Sport
+```
+
+#### Routing
+Note that the request with destination IP DP_IP needs to be routed differently in LB and
+forwarding cases.(This differs from encap traffic where all traffic flows including post LB
+service traffic share the same ``main`` route table.) Antrea creates a customized
+``antrea_service`` route table, it is used in
```

done.

suwang48404

comment created time in 2 months
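As a rough sketch, the ``antrea_service`` arrangement referenced above could be wired up as below. The values (table 300, mark 0x800, the link-local next hop, the 10.0.0.0/16 service CIDR) come from the example output quoted in a later comment; the ``PREROUTING`` hookup of the mangle chain is an assumption, not necessarily how the agent does it.

```bash
# Sketch only -- mark Pod-to-Service traffic arriving from gw0, then route
# it by the mark. Chain wiring into PREROUTING is assumed.
iptables -t mangle -N ANTREA-MANGLE
iptables -t mangle -A PREROUTING -j ANTREA-MANGLE
iptables -t mangle -A ANTREA-MANGLE -i gw0 -d 10.0.0.0/16 \
  -m comment --comment "Antrea: mark service traffic" \
  -j MARK --set-xmark 0x800/0x800

# Marked packets from gw0 consult the antrea_service table (index 300)...
ip rule add fwmark 0x800/0x800 iif gw0 lookup 300

# ...whose single default route sends everything straight back out gw0.
ip neigh replace 169.254.253.1 dev gw0 lladdr 12:34:56:78:9a:bc nud permanent
ip route replace default via 169.254.253.1 dev gw0 table 300 onlink
```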

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+#### NetworkPolicy Considerations
+Note that when a traffic flow is re-entrance, the original reply packets do not make it into OVS;
+they are un-DNATted in the host network before reaching OVS. This, however, does not have any impact on
+NetworkPolicy enforcement.
+
+Antrea enforces NetworkPolicy by allowing or disallowing initial connection packets (e.g. TCP
+SYN) to go through and to establish the connection. Once a connection is

done.

suwang48404

comment created time in 2 months
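The enforcement model described in the quoted hunk -- admit a connection's first packet, then defer to conntrack -- maps onto OVS connection-tracking flows roughly as follows; the bridge name, table numbers, and the sample ``nw_src`` match are invented for illustration.

```bash
# Illustrative flows only. Untracked packets are first run through conntrack...
ovs-ofctl add-flow br-int "table=50, priority=210, ip,ct_state=-trk actions=ct(table=50)"
# ...packets of established connections are admitted without re-checking policy...
ovs-ofctl add-flow br-int "table=50, priority=200, ip,ct_state=+trk+est actions=resubmit(,70)"
# ...and a new connection is committed only when a policy rule matches it.
ovs-ofctl add-flow br-int "table=60, priority=200, ip,ct_state=+trk+new,nw_src=10.0.1.5 actions=ct(commit),resubmit(,70)"
```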

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+#### NetworkPolicy Considerations
+Note that when a traffic flow is re-entrance, the original reply packets do not make it into OVS,

done.

suwang48404

comment created time in 2 months

push eventsuwang48404/antrea

Su Wang

commit sha d413b3cf21fe6a0fb4d9480a3673d3313b07de47

Added design doc for policy-only mode.

view details

push time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+To satisfy the above requirements, Antrea needs no knowledge of the Pod's network configuration nor
+of the underlying CNI network; it simply needs to program the following OVS flows on the OVS bridge:
+1. A default ARP responder flow that answers any ARP request. Its sole purpose is so that a Pod's
+neighbor may be resolved, and packets may be sent by that Pod to that neighbor.
+1. IP packets are routed based on destination IP if it matches any local Pod's IP.

done.

suwang48404

comment created time in 2 months
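The routing flows in that list amount to plain destination-IP matches plus a catch-all toward ``gw0``; a minimal sketch, with the bridge, table number, Pod IP, and port names all placeholders (the ARP responder flow is omitted, as it needs a longer field-rewrite action list):

```bash
# Placeholders throughout: br-int, table 70, the Pod IP and its OVS port.
# Traffic to a local Pod IP goes straight to that Pod's port...
ovs-ofctl add-flow br-int "table=70, priority=200, ip,nw_dst=10.0.1.5 actions=output:pod-veth-1"
# ...and everything else is handed to the host network via gw0.
ovs-ofctl add-flow br-int "table=70, priority=0, ip actions=output:gw0"
```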

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+incoming traffic is received on this PtP device. This is a spoke-and-hub model, where to/from Pod
+traffic, even within the same worker Node, must first traverse to the host network and be

done

suwang48404

comment created time in 2 months
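On such a Node, the spoke-and-hub wiring shows up as one host route per Pod IP pointing at its PtP device; a hypothetical example with invented device names and addresses:

```bash
# Hypothetical output; device names and Pod IPs are invented.
ip route | grep eni
10.0.1.5 dev eni3a52ce6e855 scope link
10.0.1.9 dev eni8d04b6f1c22 scope link
```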

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+Specifically, a re-entrance traffic flow may be traced as follows:<br>
+Denote VIP as the cluster IP of a service, SP_IP/DP_IP as the respective client and server Pod IP;
+VPort as the service port of a service, TPort as the target port of the server Pod, and SPort as the
+original source port. The service request's 5-tuples upon first and second entrance to the host network, and
+its reply's 5-tuples, would be like

done.

suwang48404

comment created time in 2 months

push eventsuwang48404/antrea

Su Wang

commit sha 5cc97862f89881c6f60fecea7a85a8fe133255bc

Added design doc for policy-only mode.

view details

push time in 2 months

push eventsuwang48404/antrea

Su Wang

commit sha ff23377e5976d8dc28182ca128b7c9646683467e

Added design doc for policy-only mode.

view details

push time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+When a Pod is instantiated, the container runtime first calls the primary CNI to configure the Pod's
+IP, route table, DNS etc., and then connects the Pod to the host network with a PtP device such as a
+veth-pair. When Antrea is chained with this primary CNI, the container runtime then calls
+Antrea, and the Antrea agent attaches the Pod's PtP device to the OVS bridge, and moves the host
+route to the Pod from the PtP device to the local host gateway (``gw0``) interface. This is

No, I think they are the same, just habit.

suwang48404

comment created time in 2 months
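For a single Pod, the attach-and-move step described in the quote amounts to something like the following; the names are placeholders, and the agent performs this through its CNI chaining code rather than these literal commands:

```bash
# Placeholders: br-int, the Pod's host-side PtP device, its IP, and gw0.
# Attach the Pod's PtP device to the OVS bridge...
ovs-vsctl add-port br-int eni3a52ce6e855
# ...and repoint the Pod's host route from the PtP device to gw0.
ip route replace 10.0.1.5 dev gw0
```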

pull request commentvmware-tanzu/antrea

Added design doc for policy-only mode.

/skip-all

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+# Running Antrea In Policy Only Mode
+
+Antrea supports chaining with routed CNI implementations such as EKS CNI. In this mode, Antrea
+enforces Kubernetes NetworkPolicy, and delegates Pod IP management and network connectivity to the
+primary CNI.
+
+## Design
+
+Antrea is designed to work as a NetworkPolicy plug-in for the routed CNIs. For as long as a CNI
+implementation fits into this model, Antrea may be inserted to enforce NetworkPolicy in that
+CNI's environment.
+
+<img src="/docs/assets/policy-only-cni.svg" width="600" alt="Antrea Switched CNI">
+
+The above diagram depicts a routed CNI network topology on the left, and what it looks like
+after Antrea inserts the OVS bridge into the data path.
+
+The diagram on the left illustrates a routed CNI network topology such as AWS EKS.
+In this topology a Pod connects to the host network via a
+point-to-point (PtP) like device, such as (but not limited to) a veth-pair. On the host network, a
+host route with the corresponding Pod's IP address as destination is created on each PtP device. Within
+each Pod, routes are configured to ensure all outgoing traffic is sent over this PtP device, and
+incoming traffic is received on this PtP device. This is a spoke-and-hub model, where to/from Pod
+traffic, even within the same worker Node, must first traverse to the host network and be
+routed by it.
+
+When a Pod is instantiated, the container runtime first calls the primary CNI to configure the Pod's
+IP, route table, DNS etc., and then connects the Pod to the host network with a PtP device such as a
+veth-pair. When Antrea is chained with this primary CNI, the container runtime then calls
+Antrea, and the Antrea agent attaches the Pod's PtP device to the OVS bridge, and moves the host
+route to the Pod from the PtP device to the local host gateway (``gw0``) interface. This is
+illustrated by the diagram on the right.
+
+Antrea needs to satisfy the following:
+1. All IP packets, sent on ``gw0`` in the host network, are received by the Pods exactly the same
+as if the OVS bridge had not been inserted.
+1. Similarly, all IP packets, sent by Pods, are received by other Pods or the host network exactly
+the same as if the OVS bridge had not been inserted.
+1. There are no requirements on Pod MAC addresses, as all MAC addresses stay within the OVS bridge.
+
+To satisfy the above requirements, Antrea needs no knowledge of the Pod's network configuration nor
+of the underlying CNI network; it simply needs to program the following OVS flows on the OVS bridge:
+1. A default ARP responder flow that answers any ARP request. Its sole purpose is so that a Pod's
+neighbor may be resolved, and packets may be sent by that Pod to that neighbor.
+1. IP packets are routed based on destination IP if it matches any local Pod's IP.
+1. All other IP packets are routed to the host network via the gw0 interface.
+
+These flows together handle all Pod traffic patterns with the exception of Pod-to-Service traffic,
+which we will address next.
+
+## Handling Pod-To-Service
+The discussion in this section is relevant also to Pod-to-Service traffic in NoEncap traffic
+mode. Antrea applies the same principle to handle Pod-to-Service traffic in all traffic modes where
+traffic requires no encapsulation.
+
+Antrea uses kube-proxy for load balancing. At the same time, it also supports Pod-level
+NetworkPolicy enforcement.
+
+This means that a Pod-to-Service traffic flow needs to
+1. first traverse to the host network for load balancing (DNAT), then
+1. come back to OVS bridge for Pod Egress NetworkPolicy processing, and
+1. go back to the host network yet again to be forwarded, if the DNATed destination in 1) is an
+inter-Node Pod or external network entity.
+
+We refer to the last traffic pattern as re-entrance traffic because in this pattern, a traffic flow
+enters the host network twice -- the first time for load balancing, and the second time for forwarding.
+
+Specifically, a re-entrance traffic flow may be traced as follows:<br>
+Denote VIP as the cluster IP of a service, SP_IP/DP_IP as the respective client and server Pod IP;
+VPort as the service port of a service, TPort as the target port of the server Pod, and SPort as the
+original source port. The service request's 5-tuples upon first and second entrance to the host
+network, and its reply's 5-tuples, would be like
+
+```
+request/service:
+-- Entering Host Network(via gw0):     SP_IP/SPort->VIP/VPort
+-- After LB(DNAT):                     SP_IP/SPort->DP_IP/TPort
+-- After Route(to gw0):                SP_IP/SPort->DP_IP/TPort
+
+request/forwarding:
+-- Entering Host Network(via gw0):     SP_IP/SPort->DP_IP/TPort
+-- After route(to uplink):             SP_IP/SPort->DP_IP/TPort
+
+reply:
+-- Entering Host Network(via uplink):  DP_IP/TPort->SP_IP/SPort
+-- After LB(DNAT):                     VIP/VPort->SP_IP/SPort
+-- After route(to gw0):                VIP/VPort->SP_IP/SPort
+```
+
+#### Routing
+Note that the request with destination IP DP_IP needs to be routed differently in the LB and
+forwarding cases. (This differs from encap traffic, where all traffic flows, including post-LB
+service traffic, share the same ``main`` route table.) Antrea creates a customized
+``antrea_service`` route table, which is used in
+conjunction with ip-rule and ip-tables to handle service traffic. Together they work as follows:
+1. At Antrea initialization, an ip-tables rule is created in the ``mangle`` table that marks IP packets
+that have a service IP as destination and come from gw0.
+1. At Antrea initialization, an ip-rule is added to select the ``antrea_service`` route table as the
+routing table if traffic is marked in 1).
+1. At Antrea initialization, a default route entry is added to the ``antrea_service`` route table to
+forward all traffic to ``gw0``.
+
+The outcome may be something like this
+```bash
+ip neigh | grep gw0
+169.254.253.1 dev gw0 lladdr 12:34:56:78:9a:bc PERMANENT
+
+ip route show table 300 # tbl_idx=300 is antrea_service
+default via 169.254.253.1 dev gw0 onlink
+
+ip rule | grep gw0
+300:	from all fwmark 0x800/0x800 iif gw0 lookup 300
+
+iptables -t mangle -L ANTREA-MANGLE
+Chain ANTREA-MANGLE (1 references)
+target     prot opt source               destination
+MARK       all  --  anywhere             10.0.0.0/16          /* Antrea: mark service traffic */ MARK or 0x800
+MARK       all  --  anywhere            !10.0.0.0/16          /* Antrea: unmark post LB service traffic */ MARK and 0x0
+```
+
+The above configuration allows Pod-to-Service traffic to use the ``antrea_service`` route table after
+load balancing, and to be steered back to the OVS bridge for Pod NetworkPolicy processing.
+
+#### Conntrack
+Note also that with re-entrance traffic, a service request, after being load balanced and routed
+back to the OVS bridge via ``gw0``, has exactly the same 5-tuple as when it re-enters the host network
+for forwarding.
+
+When a service request with the same 5-tuples re-enters the host network, it confuses Linux conntrack.
+Linux considers the re-entrance IP packet to be from a new connection flow that uses the same source
+port that has been allocated in the DNAT connection. In turn, the re-entrance packet triggers
+another SNAT connection. The overall effect is that the service's DNAT connection is not
+discovered by the service reply, and no un-DNAT takes place. As a result, the reply is not
+recognized, and therefore dropped by the source Pod.
+
+Antrea uses the following mechanisms to handle Pod-to-Service traffic re-entrance to the host
+network, and to bypass conntrack in the host network:
+1. In the OVS bridge, add a flow that marks any re-entrance traffic with a special source MAC.
+1. In the OVS bridge, add a flow that causes any re-entrance traffic to bypass conntrack in the OVS zone.
+1. In the host network's ip-tables, add a rule in the ``raw`` table that, if matching the special
+source MAC in 1), bypasses conntrack in the host zone.
+
+#### NetworkPolicy Considerations
+Note that when a traffic flow is re-entrance, the original reply packets do not make it into OVS;
+they are un-DNATted in the host network before reaching OVS. This, however, does not have any impact on
+NetworkPolicy enforcement.
+
+Antrea enforces NetworkPolicy by allowing or disallowing initial connection packets (e.g. TCP
+SYN) to go through and to establish the connection. Once a connection is
+established, Antrea relies on conntrack to admit or reject packets for that connection. This still
+holds true for re-entrance traffic flows, except that conntrack takes place not within the OVS
+conntrack zone, but instead in the host network's default conntrack zone. Hence NetworkPolicy
+enforcement is not impacted.
+
+It has some effects on statistics collection. If original reply traffic reaches the OVS bridge, as is
+the case for encap traffic flows, the OVS bridge knows about any reply packets dropped by OVS-zone
+conntrack, and can record them accordingly. With re-entrance traffic, the reply traffic with the
+original server Pod IPs does not reach the OVS bridge, and any traffic dropped by host network
+conntrack is unknown to the OVS bridge.
+
+## Additional Works
+1. Smoother transition in/out of Antrea in policy mode: a Kubernetes deployment shall be easily
+scaled up and down after/before Antrea insertion, to allow Pods to be added to Antrea after

done

suwang48404

comment created time in 2 months
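The three bypass mechanisms listed in the quoted hunk could look roughly like this; the special MAC, bridge name, and table number are placeholders, and the OVS flow is simplified to the re-entrance case only:

```bash
# Placeholders: br-int, table 70, the special MAC aa:bb:cc:dd:ee:ff.
# 1)+2) Re-entrance traffic (in from gw0, back out gw0) gets the special
# source MAC and takes a path with no ct() action, i.e. it skips the OVS
# conntrack zone. Only the re-entrance (non-local destination) case is shown.
ovs-ofctl add-flow br-int \
  "table=70, priority=210, ip,in_port=gw0 actions=mod_dl_src:aa:bb:cc:dd:ee:ff,IN_PORT"
# 3) The host zone skips conntrack for packets carrying that MAC.
iptables -t raw -A PREROUTING -i gw0 -m mac --mac-source aa:bb:cc:dd:ee:ff \
  -m comment --comment "Antrea: skip conntrack for re-entrance traffic" -j NOTRACK
```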

push eventsuwang48404/antrea

Su Wang

commit sha df4e8d699d68f99d9cbd91c5941a813eb1b6152b

Added design doc for policy-only mode.

view details

push time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+#### NetworkPolicy Considerations
+Note that when a traffic flow is re-entrance, the original reply packets do not make it into OVS;
+they are un-DNATted in the host network before reaching OVS. Does it have any impact on NetworkPolicy
+enforcement?

done

suwang48404

comment created time in 2 months

pull request commentvmware-tanzu/antrea

Added design doc for policy-only mode.

@jianjuns @antoninbas @abhiraut @tnqn

if comments are addressed, please help approve and merge. thx, su

suwang48404

comment created time in 2 months

pull request commentvmware-tanzu/antrea

Added design doc for policy-only mode.

/test-all

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+When a service request with the same 5-tuples re-enters the host network, it confuses Linux conntrack.
+Linux considers the re-entrance IP packet to be from a new connection flow that uses the same source
+port that has been allocated in the DNAT connection. In turn, the re-entrance packet triggers
+another SNAT connection. The overall effect is that the service's DNAT connection is not
+discovered by the service reply, and no un-DNAT takes place. As a result, the reply is not

done.

suwang48404

comment created time in 2 months
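For reference, the DNAT connection that the reply fails to match would look something like the following in the host conntrack zone; the entry is hypothetical, with invented addresses and ports:

```bash
# Hypothetical entry; the reply is only un-DNATted if it matches this.
conntrack -L -d 10.0.99.10
tcp      6 86397 ESTABLISHED src=10.0.1.5 dst=10.0.99.10 sport=33218 dport=80 \
  src=10.0.2.7 dst=10.0.1.5 sport=8080 dport=33218 [ASSURED] mark=0 use=1
```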

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+Antrea uses kube-proxy for load balancing. At the same time, it also supports Pod-level
+NetworkPolicy enforcement.
+
+This means that a Pod-to-Service traffic flow needs to
+1. first traverse to the host network for load balancing (DNAT), then
+1. come back to OVS bridge for Pod Egress NetworkPolicy processing, and
+1.
go back to the host network yet again to be forwarded, if DNATed destination in 1) is an +inter-Node Pod or external network entity. ++We refer to the last traffic pattern as re-entrance traffic because in this pattern, a traffic flow+enters host network twice -- first time for load balancing, and second time for forwarding.++Specifically a re-entrance traffic may be traced as follows:<br>+Denote VIP as cluster IP of a service, SP_IP/DP_IP as respective client and server Pod IP; +VPort as service port of a service, TPort as target port of server Pod, and SPort as original +source port. The service request's 5-tuples upon first and second entrance to the host network, and+its reply's 5-tuples would be like++```+request/service:   +-- Entering Host Network(via gw0):     SP_IP/SPort->VIP/VPort +-- After LB(DNAT):                     SP_IP/SPort->DP_IP/TPort+-- After Route(to gw0):                SP_IP/SPort->DP_IP/TPort++request/forwrding: +-- Entering Host Network(via gw0):     SP_IP/SPort->DP_IP/TPort+-- After route(to uplink):             SP_IP/SPort->DP_IP/TPort++reply:+-- Entering Host Network(via uplink):  DP_IP/TPort -> SP_IP/SPort+-- After LB(DNAT):                     VIP/VPort->SP_IP/Sport+-- After route(to gw0):                VIP/VPort->SP_IP/Sport+```++#### Routing +Note that the request with destination IP DP_IP needs to be routed differently in LB and +forwarding cases.(This differs from encap traffic where all traffic flows including post LB+service traffic share the same ``main`` route table.) Antrea creates a customized+``antrea_service`` route table, it is used in+conjunction with ip-rule and ip-tables to handle service traffic. Together they work as follows+1. At Antrea initialization, an ip-tables rule is created in ``mangle table`` that marks IP packets+with service IP as destination IP and are from gw0. +1. At Antrea initialization, an ip-rule is added to select ``antrea_service`` route table as routing+table if traffic is marked in 1).+1. At Antrea initialization, a default route entry is added to ``antrea_service`` route table to+forward all traffic to ``gw0``.++The outcome may be something like this+```bash+ip neigh | grep gw0+169.254.253.1 dev gw0 lladdr 12:34:56:78:9a:bc PERMANENT++ip route show table 300 #tbl_idx=300 is antrea_service+default via 169.254.253.1 dev gw0 onlink ++ip rule | grep gw0+300:	from all fwmark 0x800/0x800 iif gw0 lookup 300 ++iptables -t mangle  -L ANTREA-MANGLE +Chain ANTREA-MANGLE (1 references)+target     prot opt source               destination         +MARK       all  --  anywhere             10.0.0.0/16          /* Antrea: mark service traffic */ MARK or 0x800+MARK       all  --  anywhere            !10.0.0.0/16          /* Antrea: unmark post LB service traffic */ MARK and 0x0+```++The above configuration allows Pod-to-Service traffic to use ``antrea_service`` route table after+load balancing, and to be steered back to OVS bridge for Pod NetworkPolicy processing.++#### Conntrack+Note also that with re-entrance traffic a service request, after being load balanced and routed+back to OVS bridge via ``gw0``, has exactly the same 5-tuple as when it re-enters the host network+for forwarding.

done.

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

# Running Antrea In Policy Only Mode

Antrea supports chaining with routed CNI implementations such as EKS CNI. In this mode, Antrea enforces Kubernetes NetworkPolicy, and delegates Pod IP management and network connectivity to the primary CNI.

## Design

Antrea is designed to work as a NetworkPolicy plug-in for routed CNIs. As long as a CNI implementation fits into this model, Antrea may be inserted to enforce NetworkPolicy in that CNI's environment.

<img src="/docs/assets/policy-only-cni.svg" width="600" alt="Antrea Switched CNI">

The above diagram depicts a routed CNI network topology on the left, and what it looks like after Antrea inserts the OVS bridge into the data path.

The diagram on the left illustrates a routed CNI network topology such as AWS EKS. In this topology a Pod connects to the host network via a point-to-point (PtP) like device, such as (but not limited to) a veth-pair. On the host network, a host route with the corresponding Pod's IP address as destination is created on each PtP device. Within each Pod, routes are configured to ensure all outgoing traffic is sent over this PtP device, and incoming traffic is received on this PtP device. This is a spoke-and-hub model, where Pod traffic, even within the same worker Node, must first traverse to the host network and be routed by it.

When a Pod is instantiated, the container runtime first calls the primary CNI to configure the Pod's IP, route table, DNS etc, and then connects the Pod to the host network with a PtP device such as a veth-pair. When Antrea is chained with this primary CNI, the container runtime then calls Antrea, and the Antrea agent attaches the Pod's PtP device to the OVS bridge, and moves the host route to the Pod from the PtP device to the local host gateway (``gw0``) interface. This is illustrated by the diagram on the right.

Antrea needs to satisfy that
1. All IP packets, sent on ``gw0`` in the host network, are received by the Pods exactly the same as if the OVS bridge had not been inserted.
1. Similarly, all IP packets, sent by Pods, are received by other Pods or the host network exactly the same as if the OVS bridge had not been inserted.
1. There are no requirements on Pod MAC addresses, as all MAC addresses stay within the OVS bridge.

To satisfy the above requirements, Antrea needs no knowledge of the Pod's network configuration nor of the underlying CNI network; it simply needs to program the following OVS flows on the OVS bridge (see the sketch below):
1. A default ARP responder flow that answers any ARP request. Its sole purpose is so that a Pod's neighbor may be resolved, and packets may be sent by that Pod to that neighbor.
1. IP packets are forwarded to a local Pod if their destination IP matches that Pod's IP.
1. All other IP packets are routed to the host network via the ``gw0`` interface.

These flows together handle all Pod traffic patterns with the exception of Pod-to-Service traffic, which we will address next.
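To make these three flows concrete, here is a minimal, illustrative sketch of what they could look like if written by hand with ``ovs-ofctl``. Everything concrete in it is an assumption for illustration only: the bridge name, table and port numbers, the Pod address and MAC, and the virtual gateway MAC. The real Antrea agent programs a multi-table pipeline through its own OpenFlow client, not via ``ovs-ofctl``.

```bash
# Illustrative only. Assumptions (not from Antrea): bridge br-int, a local Pod
# 10.0.0.5 with MAC 0a:58:0a:00:00:05 on OVS port 3, gw0 on OVS port 1, and a
# virtual gateway MAC of 12:34:56:78:9a:bc.

# 1. Default ARP responder: turn any ARP request into a reply from the virtual
#    MAC, swapping sender/target IPs through a scratch register (reg2).
ovs-ofctl add-flow br-int "table=0,priority=200,arp,arp_op=1,actions=\
move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],mod_dl_src:12:34:56:78:9a:bc,\
load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],\
load:0x123456789abc->NXM_NX_ARP_SHA[],move:NXM_OF_ARP_TPA[]->NXM_NX_REG2[],\
move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],move:NXM_NX_REG2[]->NXM_OF_ARP_SPA[],\
IN_PORT"

# 2. Forward an IP packet straight to a local Pod when its destination IP
#    matches that Pod's IP.
ovs-ofctl add-flow br-int \
  "table=0,priority=100,ip,nw_dst=10.0.0.5,actions=mod_dl_dst:0a:58:0a:00:00:05,output:3"

# 3. Route all other IP packets to the host network via the gw0 port.
ovs-ofctl add-flow br-int \
  "table=0,priority=90,ip,actions=mod_dl_dst:12:34:56:78:9a:bc,output:1"
```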
## Handling Pod-To-Service

The discussion in this section is also relevant to Pod-to-Service traffic in NoEncap traffic mode. Antrea applies the same principle to handle Pod-to-Service traffic in all traffic modes where traffic requires no encapsulation.

Antrea uses kube-proxy for load balancing. At the same time, it also supports Pod level NetworkPolicy enforcement.

This means that a Pod-to-Service traffic flow needs to
1. first traverse to the host network for load balancing (DNAT), then
1. come back to the OVS bridge for Pod Egress NetworkPolicy processing, and
1. go back to the host network yet again to be forwarded, if the DNATed destination in 1) is an inter-Node Pod or an external network entity.

We refer to the last traffic pattern as re-entrance traffic because in this pattern, a traffic flow enters the host network twice -- the first time for load balancing, and the second time for forwarding.

Specifically, re-entrance traffic may be traced as follows:<br>
Denote VIP as the cluster IP of a service, SP_IP/DP_IP as the respective client and server Pod IPs, VPort as the service port of a service, TPort as the target port of the server Pod, and SPort as the original source port. The service request's 5-tuples upon first and second entrance to the host network, and its reply's 5-tuples, would be like

```
request/service:
-- Entering Host Network(via gw0):     SP_IP/SPort->VIP/VPort
-- After LB(DNAT):                     SP_IP/SPort->DP_IP/TPort
-- After Route(to gw0):                SP_IP/SPort->DP_IP/TPort

request/forwarding:
-- Entering Host Network(via gw0):     SP_IP/SPort->DP_IP/TPort
-- After route(to uplink):             SP_IP/SPort->DP_IP/TPort

reply:
-- Entering Host Network(via uplink):  DP_IP/TPort->SP_IP/SPort
-- After LB(un-DNAT):                  VIP/VPort->SP_IP/SPort
-- After route(to gw0):                VIP/VPort->SP_IP/SPort
```

#### Routing

Note that the request with destination IP DP_IP needs to be routed differently in the LB and forwarding cases. (This differs from encap traffic, where all traffic flows, including post-LB service traffic, share the same ``main`` route table.) Antrea creates a customized ``antrea_service`` route table; it is used in conjunction with ip-rule and ip-tables to handle service traffic. Together they work as follows (a setup sketch follows this section):
1. At Antrea initialization, an ip-tables rule is created in the ``mangle`` table that marks IP packets that have a Service IP as destination IP and come from ``gw0``.
1. At Antrea initialization, an ip-rule is added to select the ``antrea_service`` route table as the routing table if traffic is marked as in 1).
1. At Antrea initialization, a default route entry is added to the ``antrea_service`` route table to forward all traffic to ``gw0``.

The outcome may be something like this

```bash
ip neigh | grep gw0
169.254.253.1 dev gw0 lladdr 12:34:56:78:9a:bc PERMANENT

ip route show table 300 #tbl_idx=300 is antrea_service
default via 169.254.253.1 dev gw0 onlink

ip rule | grep gw0
300:	from all fwmark 0x800/0x800 iif gw0 lookup 300

iptables -t mangle -L ANTREA-MANGLE
Chain ANTREA-MANGLE (1 references)
target     prot opt source               destination
MARK       all  --  anywhere             10.0.0.0/16          /* Antrea: mark service traffic */ MARK or 0x800
MARK       all  --  anywhere            !10.0.0.0/16          /* Antrea: unmark post LB service traffic */ MARK and 0x0
```

The above configuration allows Pod-to-Service traffic to use the ``antrea_service`` route table after load balancing, and to be steered back to the OVS bridge for Pod NetworkPolicy processing.
#### Conntrack

Note also that with re-entrance traffic, a service request, after being load balanced and routed back to the OVS bridge via ``gw0``, has exactly the same 5-tuple as when it re-enters the host network for forwarding.

When a service request with the same 5-tuple re-enters the host network, it confuses Linux conntrack, which considers the re-entrance IP packet to belong to a new connection flow that uses the same source port already allocated to the DNAT connection. In turn, the re-entrance packet triggers another SNAT connection. The overall effect is that the service's DNAT connection is not found for the service reply, so no un-DNAT takes place. As a result, the reply is not recognized, and is therefore dropped by the source Pod.

Antrea uses the following mechanisms to let Pod-to-Service traffic re-enter the host network while bypassing conntrack in the host network (see the sketch after this section):
1. In the OVS bridge, add a flow that marks any re-entrance traffic with a special source MAC.
1. In the OVS bridge, add a flow that causes any re-entrance traffic to bypass conntrack in the OVS zone.
1. In the host network's ip-tables, add a rule in the ``raw`` table that bypasses conntrack in the host zone if a packet matches the special source MAC from 1).

#### NetworkPolicy Considerations

Note that when a traffic flow is re-entrance traffic, the original reply packets do not make it into OVS; they are un-DNATted in the host network before reaching OVS. Does this have any impact on NetworkPolicy enforcement?

Antrea enforces NetworkPolicy by allowing or disallowing initial connection packets (e.g. TCP SYN) to go through and establish a connection. Once a connection is established, Antrea relies on conntrack to admit or reject packets for that connection. This still holds true for re-entrance traffic flows, except that conntrack takes place not within the OVS conntrack zone, but in the host network's default conntrack zone. Hence NetworkPolicy enforcement is not impacted.

It does have some effect on statistics collection. If the original reply traffic reaches the OVS bridge, as is the case for encap traffic flows, the OVS bridge knows about any reply packets dropped by OVS zone conntrack, and can record them accordingly. With re-entrance traffic, the reply traffic with the original server Pod IPs does not reach the OVS bridge, and any traffic dropped by host network conntrack is unknown to the OVS bridge.

## Additional Works

1. Smoother transition in/out of Antrea in policy mode: a Kubernetes deployment shall be easily scaled up and down after/before Antrea insertion, so that Pods can be added to Antrea after installation, and can reconnect to the old CNI topology after Antrea is uninstalled.
1. Reconciliation during mode change. Up to this point, encap is the predominant mode. As more traffic modes are used, it is reasonable for a cluster to change traffic encapsulation on the fly; some components (routeClient, ovsConfigClient(??)) may not be ready for it.
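For the conntrack bypass described under "Conntrack" above, the host-side rule (step 3 of the mechanisms list) could look roughly like the sketch below. The mark MAC is hypothetical, and the OVS-side flows (steps 1 and 2) are only hinted at in the comments, since they live in Antrea's OpenFlow pipeline.

```bash
# Step 3 (host side): skip host-zone conntrack for packets that carry the
# special source MAC. The MAC value here is hypothetical.
iptables -t raw -A PREROUTING -i gw0 -m mac --mac-source de:ad:be:ef:de:ad \
  -m comment --comment "Antrea: bypass conntrack for re-entrance traffic" \
  -j NOTRACK

# Steps 1 and 2 (OVS side) would, in spirit, rewrite the source MAC of
# re-entrance traffic to de:ad:be:ef:de:ad and forward it without any ct()
# action, so it never enters the OVS conntrack zone either.
```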

done.

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.


done

suwang48404

comment created time in 2 months

push eventsuwang48404/antrea

Jianjun Shen

commit sha 2472bbe9a0d873380ebc5a89a02745a20cd0dfe9

Add some comments on Pod OVS flows and port deletion (#537)

view details

Jianjun Shen

commit sha 742ec4e89831e9fa080c8b157f047cc9b5d0632e

Add Controller "system" API group and controller-info antctl command Add "system" API group for APIs of exposing Antrea system information. Add "controllerinfos" resource to the "system" API group, which returns AntreaControllerInfo (reusing AntreaControllerInfo CRD struct). Add "get controller-info" antctl command that queries the "controllerinfos" resource and prints it out. Change "agent-info" command to "get agent-info".

view details

Jianjun Shen

commit sha 9c6c1aef2a12a3f156cf56c87e5ef5867956e987

antctl Controller version command to use Controller's system API Change Controller's "version" command to get AntreaControllerInfo from the "system" API group (not from the AntreaControllerInfo CRD).

view details

Jianjun Shen

commit sha eb83658c995368d53ac17eada0a0a03d663beaae

Fix antrea-eks.yml (#542) It was not updated correctly by previous commits.

view details

Srikar Tati

commit sha cdf38761b64c1ab75a3828cc7b03bd80182b9000

Kind provider e2e tests: For e2e tests, the kind provider only supported the default master node name ("kind-control-plane"). Fixed to run the e2e tests with any master node name. In addition, https://github.com/vmware-tanzu/antrea/blob/master/ci/kind/kind-setup.sh accepts any cluster name. We should probably restrict the naming convention to rfc1035/rfc1123. Otherwise, we do not align with the Kubernetes naming convention and the following command breaks: go test -v github.com/vmware-tanzu/antrea/test/e2e -provider=kind Testing: Tested with cluster names such as stati_kind, statiKind, stati-kind, stati-kind01, etc.

view details

Antonin Bas

commit sha b3a5c279b18d2269581f9cea5341b6f07c28131b

Start documentation for antctl Add a new document under docs/ with information on how to install (instructions valid starting with release 0.5.0) and use antctl. This is user-facing documentation and does not include information about the antctl implementation. We also do not include a detailed list of commands at the moment. This list can come later if needed (can be hard to keep up-to-date), and we can keep adding information about specific commands in troubleshooting.md for specific debugging scenarios. Fixes #337

view details

Quan Tian

commit sha d95d2fb8830fc76c3540c7440e67a94a58e88a60

Improve NetworkPolicy logging (#533) * Added a few V0 logs for essential NetworkPolicy events like creation/deletion of computed NetworkPolicies. * Added NetworkPolicy context when logging rules that are being installed. * Adjusted a few logs that were in an inconsistent style.

view details

Antonin Bas

commit sha 6d5b3f575ecd3a30ba6b56ce6a94aa4c89716032

Add e2e test for upgrade scenario The new Go test (TestUpgrade) is meant to be run through the wrapper script ./ci/kind/test-upgrade-antrea.sh. At the moment we test upgrade from versions 0.3.0, 0.4.0 and 0.4.1 as part of CI, but we can modify that set as we go through Antrea releases. We also run the test for every PR against master, which may be a bit excessive. Fixes #511

view details

Jianjun Shen

commit sha f2d0b2b5680e4aad587bcd7caf1410eaad882f57

Change antctl commands to follow K8s resource naming convention (#546) For example: controller-info -> controllerinfo, network-policy -> networkpolicy, applied-to-group -> appliedtogroup.

view details

Quan Tian

commit sha c40d8b60faaba567987a9373e71c4f493fe782f2

Check whether an object matches the watcher before generating init event (#543) It's observed in scale tests that antrea-controller had a memory usage peak when agents were connecting to it, which is quite a bit greater than the usage after the agents were connected. For example, at a scale of 100k Pods, 75k NetworkPolicies, 25k Namespaces, and 1k antrea-agents, it would consume 8~9GB memory at peak and sometimes lead to OOM, while it only consumed 1.3GB memory when there were no agent connections and 1.5GB after 1k Nodes connected. The memory is consumed by a process in store.Watch where it lists all items (pointers) of a kind of resource, converts them to InternalEvent, and sends them to watchers for further processing. This means that every object is converted once regardless of whether the watcher is interested in it or not, while the conversion is an expensive operation in memory usage, and normally a watcher won't be interested in most objects; for example, at the above scale each agent only cares about hundreds out of 75k NetworkPolicies. This patch optimizes it by filtering the object before converting it. As the filter is a very cheap hash query, it saves a lot of memory and CPU. For example, in a local scale test with 80 Pods, 30k NetworkPolicies, 10k Namespaces, and 50 antrea-agents, it would consume 1.1~1.2GB memory at peak. With this patch, it consumes only ~310MB at peak.

view details

Antonin Bas

commit sha 1a73ecc05d75c17647120e3a3b2fd0ed52247eea

Do not use Github API to check tag There seems to be some rate limiting which creates issues when running an unauthenticated request as part of a Github workflow. Furthermore, using "grep -q" with "curl" sometimes leads to the "(23) Failed writing body" error being displayed when grep closes the pipe early. Not sure exactly why since we run curl in silent mode, but I have seen it in the workflow logs.

view details

Abhishek Raut

commit sha 1673df41e3c63cf48ea483b38ee843c185a6c456

Add spell check script to hack (#539) Add spell check script to hack Add a script to check commonly misspelled English words and add it to Github workflow

view details

Quan Tian

commit sha 77d8a3dfa1f674ef2b68dea81982b2d5861ec8b7

Fix typo in register (#549) It leads to `antctl get appliedtogroup` failing.

view details

Yiwei

commit sha 8563f2fde55dfb9282deb57078f123c2bfddcd50

Add table output formats for antctl get commands

view details

Rahul Jain

commit sha e3c0072db65f47240b21d8711003009b6322f8eb

Enable Antrea in GKE cluster for Ubuntu host (#532) * Enable Antrea in GKE cluster for Ubuntu host Antrea can be enabled as a CNI operating in noEncap mode and enforcing Network Policies in a GKE cluster. This check-in enables that support. Also added details regarding deploying and configuring Antrea in a GKE cluster. Co-authored-by: Rahul Jain <rahulj@rahulj-a01.vmware.com> Co-authored-by: Jianjun Shen <shenj@vmware.com> Co-authored-by: Antonin Bas <antonin.bas@gmail.com>

view details

Antonin Bas

commit sha 47700aee013c31840efd80c7d6ed2f9ee79976c0

Automate asset upload when creating a release When creating a release, the necessary assets (yaml manifests and antctl binaries) will now be added to the release automatically by a Github workflow. This reduces the potential for human error. For antctl, we build and upload the following binaries: * linux: amd64, arm64, arm (arm/v7) * darwin: amd64 * windows: amd64 Fixes #312

view details

Antonin Bas

commit sha f08cd259ae3ad9efcfd38c0dbd985a65db1155f3

Update CHANGELOG for v0.5.0 release

view details

Antonin Bas

commit sha 955df74821fadc9e219ca6b08fe46834fc496a24

Set VERSION to v0.6.0-dev

view details

Abhishek Raut

commit sha 83037d87865f08a92cca60751021495ce004bee9

Do not ignore CHANGELOG for spell check (#551)

view details

Zhecheng Li

commit sha b10a5da57fd94e75c749a53e8a85b4450e9b5eaa

CI update on cleanup-job and log saving (#554) * disable cleanup-job's concurrency * save capv/capi logs * remove unnecessary Antrea cleanup before build, since workload clusters are new.

view details

push time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

## Additional Works

why? can I use the plural "works" here?

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

...and the Antrea agent attaches Pod's PtP device to the OVS bridge, and moves the host route to the Pod to local host gateway(``gw0``) interface from the PtP device.

rewording the existing sentence would make it more readable,

i.e. move route .... to local host gateway

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.


done

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.


done

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.


done

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+# Running Antrea In Policy Only Mode
+
+Antrea supports chaining with routed CNI implementations such as EKS CNI. In this mode, Antrea
+enforces Kubernetes NetworkPolicy, and delegates Pod IP management and network connectivity to the
+primary CNI.
+
+## Design
+
+Antrea is designed to work as a NetworkPolicy plug-in for routed CNIs. For as long as a CNI
+implementation fits into this model, Antrea may be inserted to enforce NetworkPolicy in that
+CNI's environment.
+
+<img src="/docs/assets/policy-only-cni.svg" width="600" alt="Antrea Switched CNI">
+
+The above diagram depicts a routed CNI network topology on the left, and what it looks like
+after Antrea inserts the OVS bridge into the data path.
+
+In a routed CNI network topology such as AWS EKS, each Pod connects to the host network via a
+point-to-point (PtP) like device, such as (but not limited to) a veth pair. On the host network, a
+host route with the corresponding Pod's IP address as destination is created on each PtP device.
+Within each Pod, routes are configured to ensure all outgoing traffic is sent over this PtP device,
+and incoming traffic is received on this PtP device. This is a spoke-and-hub model, where to/from
+Pod traffic, even within the same worker Node, must first traverse to the host network and be
+routed by it.
+
+When a Pod is instantiated, the container runtime first calls the primary CNI to configure the
+Pod's IP, route table, DNS etc., and then connects the Pod to the host network with a PtP device
+such as a veth pair. When CNI chaining is configured, the container runtime then calls Antrea, and
+the Antrea agent attaches the Pod's PtP device to the OVS bridge, and moves the host route to the
+Pod from the PtP device to the local host gateway (``gw0``) interface. This is illustrated by the
+diagram on the right.
+
+Antrea needs to ensure that
+1. All IP packets, sent on ``gw0`` in the host network, are received by the Pods exactly the same
+as if the OVS bridge had not been inserted.
+1. Similarly, all IP packets, sent by Pods, are received by other Pods or the host network exactly
+the same as if the OVS bridge had not been inserted.
+1. There are no requirements on Pod MAC addresses, as all MAC addresses stay within the OVS bridge.
+
+To satisfy the above requirements, Antrea needs no knowledge of the Pod's network configuration nor
+of the underlying CNI network; it simply needs to program the following OVS flows on the OVS bridge:
+1. A default ARP responder flow that answers any ARP request. Its sole purpose is so that a Pod's
+neighbor may be resolved, and packets may be sent by that Pod to that neighbor.
+1. IP packets are routed based on their destination IP if it matches any local Pod's IP.
+1. All other IP packets are routed to the host network via the gw0 interface.
+
+These flows together handle all Pod traffic patterns, with the exception of Pod-to-Service traffic,
+which we address next.
+
+## Handling Pod-To-Service
+The discussion in this section is relevant also to Pod-to-Service traffic in NoEncap traffic
+mode. Antrea applies the same principle to handle Pod-to-Service traffic in all traffic modes where
+traffic requires no encapsulation.
+
+Antrea uses kube-proxy for load balancing. At the same time, it also supports Pod-level
+NetworkPolicy enforcement.
+
+This means that any Pod-to-Service traffic flow needs to
+1. first traverse to the host network for load balancing (DNAT), then
+1. come back to the OVS bridge for Pod NetworkPolicy processing, and
+1. if the DNATed destination in 1) is an inter-Node Pod or an external network entity, go back
+to the host network yet again to be forwarded.
+
+We refer to the last traffic pattern as re-entrance traffic because in this pattern, a traffic flow
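
The three flows listed above could be sketched with ``ovs-ofctl`` roughly as below. This is a minimal illustration, not Antrea's actual pipeline: the bridge name, table and port numbers, Pod IP, and MAC addresses are all made up, and the ARP responder is the generic OVS pattern for answering any request with a fixed MAC.

```bash
BR=br-int   # hypothetical bridge name

# 1. Default ARP responder: turn any ARP request into a reply from a made-up
#    MAC (aa:bb:cc:dd:ee:ff), swapping sender/target fields via reg2, and send
#    it back out the port it came in on.
ovs-ofctl add-flow $BR "table=0,priority=200,arp,arp_op=1,actions=\
move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:aa:bb:cc:dd:ee:ff->eth_src,\
load:0x2->NXM_OF_ARP_OP[],move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],\
set_field:aa:bb:cc:dd:ee:ff->arp_sha,move:NXM_OF_ARP_TPA[]->NXM_NX_REG2[],\
move:NXM_OF_ARP_SPA[]->NXM_OF_ARP_TPA[],move:NXM_NX_REG2[]->NXM_OF_ARP_SPA[],\
IN_PORT"

# 2. Route by destination IP to a local Pod (Pod 10.0.1.5 on OVS port 5 here),
#    rewriting the destination MAC on the way.
ovs-ofctl add-flow $BR "table=0,priority=100,ip,nw_dst=10.0.1.5,\
actions=set_field:0e:00:00:00:01:05->eth_dst,output:5"

# 3. All other IP traffic goes to the host network via gw0 (assumed port 2).
ovs-ofctl add-flow $BR "table=0,priority=0,ip,actions=output:2"
```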

done.

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+…
+Antrea uses the following mechanisms to handle Pod-to-Service traffic re-entering the host
+network and to bypass conntrack in the host network:
+1. In the OVS bridge, add a flow that marks any re-entrance traffic with a special source MAC.
+1. In the OVS bridge, add a flow that causes any re-entrance traffic to bypass conntrack in the
+OVS zone.
+1. In the host network's iptables, add a rule in the ``raw`` table that, if matching the special
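
For the ``raw``-table rule in the quoted list, the host side could be approximated as below; the special source MAC is a made-up value, and the OVS flows that set it are omitted here.

```bash
# Packets entering the host from gw0 that carry the OVS-applied "re-entrance"
# source MAC skip conntrack in the host zone entirely.
iptables -t raw -A PREROUTING -i gw0 -m mac --mac-source de:ad:be:ef:00:01 -j NOTRACK
```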

done

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+…
+The above configuration allows Pod-to-Service traffic to use the ``antrea_service`` route table after
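
Given that state, the post-LB path can be spot-checked from the node with ``ip route get``. The fwmark and table come from the quoted output; the client and backend Pod IPs are hypothetical.

```bash
# Simulate routing a load-balanced packet: destination is the backend Pod
# (10.0.1.5), source is the client Pod (10.0.2.7), it arrived on gw0 and
# carries mark 0x800 -- it should resolve via table 300 back out of gw0.
ip route get 10.0.1.5 from 10.0.2.7 iif gw0 mark 0x800
```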

done

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+…
+When a service request with the same 5-tuple re-enters the host network, it confuses Linux
+conntrack. Linux considers the re-entrance IP packet to be from a new connection flow that uses
+the same source port already allocated to the DNAT connection. In turn, the re-entrance packet
+triggers another SNAT connection. The overall effect is that the service's DNAT connection is not
+matched by the service reply, so no un-DNAT takes place. As a result, the reply is not recognized,
+and is therefore dropped by the source Pod.
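
Without a bypass, the confusion described above can be observed with the ``conntrack`` CLI; the VIP and backend Pod IP below are hypothetical.

```bash
# First pass: the DNAT entry created when the request hit the service VIP.
conntrack -L -p tcp --dst 10.96.0.10
# Second pass: a separate, spurious entry keyed on the backend Pod IP,
# created when the identical 5-tuple re-entered the host network.
conntrack -L -p tcp --dst 10.0.1.5
```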

done

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+…
+1. At Antrea initialization, an iptables rule is created in the ``mangle`` table that marks IP
+packets that have a service IP as destination and come from gw0.
+1. At Antrea initialization, an ip rule is added that selects the ``antrea_service`` route table
+for traffic marked in 1).
+1. At Antrea initialization, a default route entry is added to the ``antrea_service`` route table to

done

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+…
+Specifically, a re-entrance traffic flow may be traced as follows:<br>
+Denote VIP as the cluster IP of a service, SP_IP/DP_IP as the respective client and server Pod IPs,
+VPort as the service port of a service, TPort as the target port of the server Pod, and SPort as the
+original source port. The service request's 5-tuples upon first and second entrance to the host
+network, and its reply's 5-tuples, would be like

done.

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+…
+This means that any Pod-to-Service traffic flow needs to
+1. first traverse to the host network for load balancing (DNAT), then
+1. come back to the OVS bridge for Pod NetworkPolicy processing, and
+1. if the DNATed destination in 1) is an inter-Node Pod or an external network entity, go back
+to the host network yet again to be forwarded.
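
Step 1 is ordinary kube-proxy load balancing. Conceptually, the DNAT it performs is equivalent to a hand-written rule like the one below; the VIP, backend address, and ports are made up, and kube-proxy's real chains look quite different.

```bash
# Stand-in for kube-proxy's DNAT: traffic to the ClusterIP 10.96.0.10:80 is
# rewritten to a single backend Pod at 10.0.1.5:8080.
iptables -t nat -A PREROUTING -p tcp -d 10.96.0.10 --dport 80 \
    -j DNAT --to-destination 10.0.1.5:8080
```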

done.

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+…
+This means that any Pod-to-Service traffic flow needs to
+1. first traverse to the host network for load balancing (DNAT), then
+1. come back to the OVS bridge for Pod NetworkPolicy processing, and

done.

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+…
+Antrea uses kube-proxy for load balancing. At the same time, it also supports Pod-level
+NetworkPolicy enforcement.
+
+This means that any Pod-to-Service traffic flow needs to

done.

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+…
+This means that any Pod-to-Service traffic flow needs to
+1. first traverse to the host network for load balancing (DNAT), then
+1. come back to the OVS bridge for Pod NetworkPolicy processing, and

done.

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+…
+Antrea uses kube-proxy for load balancing. At the same time, it also supports Pod-level

done

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+…
+To satisfy the above requirements, Antrea needs no knowledge of the Pod's network configuration nor
+of the underlying CNI network; it simply needs to program the following OVS flows on the OVS bridge:

done

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+…
+Antrea needs to ensure that
+1. All IP packets, sent on ``gw0`` in the host network, are received by the Pods exactly the same
+as if the OVS bridge had not been inserted.
+1. Similarly, all IP packets, sent by Pods, are received by other Pods or the host network exactly
+the same as if the OVS bridge had not been inserted.

done.

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+…
+Antrea needs to ensure that
+1. All IP packets, sent on ``gw0`` in the host network, are received by the Pods exactly the same
+as if the OVS bridge had not been inserted.

done

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

…In a routed CNI network topology such as AWS EKS, each Pod connects to the host network via a point-to-point (PtP) like device, such as (but not limited to) a veth-pair. On the host network, a

Stated that this paragraph is for routed CNIs only.

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

+<?xml version="1.0" encoding="UTF-8"?>

changes C to P.

suwang48404

comment created time in 2 months

push eventsuwang48404/antrea

Su Wang

commit sha 6ef82bf0541da5296705ab01e47a279637c8640a

Added design doc for policy-only mode.

view details

push time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

…This is a spoke-and-hub model, where to/from Pod traffic, even within the same worker Node must traverse first to the host network and being

again, this paragraph describes routed cni without antrea.

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

…In a routed CNI network topology such as AWS EKS, each Pod connects to the host network via a point-to-point (PtP) like device, such as (but not limited to) a veth-pair. On the host network, a

This paragraph explains how a routed CNI works without Antrea/OVS being inserted; the next paragraph describes what happens when OVS is inserted.

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

…
1. There are no requirements on Pod MAC addresses, as all MAC addresses stay within the OVS bridge.

To satisfy the above requirements, Antrea needs no knowledge of the Pod's network configuration nor of the underlying CNI network; it simply needs to program the following OVS flows on the OVS bridge:
1. A default ARP responder flow that answers any ARP request. Its sole purpose is so that a Pod's neighbor may be resolved, and packets may be sent by that Pod to that neighbor.
1. IP packets are routed based on their destination IP if it matches any local Pod's IP.
1. All other IP packets are routed to the host network via the gw0 interface.

These flows together handle all Pod traffic patterns with the exception of Pod-to-Service traffic, which we will address next.

## Handling Pod-To-Service

The discussion in this section is also relevant to Pod-to-Service traffic in NoEncap traffic mode. Antrea applies the same principle to handle Pod-to-Service traffic in all traffic modes where traffic requires no encapsulation.

Antrea uses kube-proxy for load balancing. At the same time, it also supports Pod level NetworkPolicy enforcement.

This means that any Pod-to-Service traffic flow needs to
1. first traverse to the host network for load balancing (DNAT), then
1. come back to the OVS bridge for Pod NetworkPolicy processing, and
1. if the DNATed destination in 1) is an inter-Node Pod or external network entity, go back to the host network yet again to be forwarded.

We refer to the last traffic pattern as re-entrance traffic because in this pattern, a traffic flow enters the host network twice -- first for load balancing, and second for forwarding.

Specifically, re-entrance traffic may be traced as follows:
Denote VIP as the cluster IP of a service, SP_IP/DP_IP as the respective client and server Pod IPs, VPort as the service port of a service, TPort as the target port of the server Pod, and SPort as the original source port. The service request's 5-tuples upon first and second entrance to the host network, and its reply's 5-tuples, would be:

```
request/service:    SP_IP/SPort->VIP/VPort :: LB(DNAT)=>SP_IP/SPort->DP_IP/TPort :: Route=>gw0/OVS
request/forwarding: SP_IP/SPort->DP_IP/TPort :: Route=>uplink/out
reply:              DP_IP/TPort->SP_IP/SPort :: LB(unDNAT)=>VIP/VPort->SP_IP/SPort :: Route=>gw0/OVS
```

#### Routing

Note that the request with destination IP DP_IP needs to be routed differently in the LB and forwarding cases. Antrea creates a customized ``antrea_service`` route table, it is used in

"antrea_service" is the name of route table Antrea agent creates on worker Node, should I highlight it?

suwang48404

comment created time in 2 months
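
To make the three OVS flow types listed in the excerpt above concrete, here is a rough ovs-ofctl sketch. Everything below is an assumption for illustration: the bridge name, port names, virtual MAC, Pod IP/MAC, priorities, and the single-table layout; the actual Antrea pipeline uses its own tables and registers.

```bash
# 1. Default ARP responder: answer any ARP request with a fixed virtual MAC
#    (aa:bb:cc:dd:ee:ff is a placeholder) and send the reply out the ingress
#    port; push/pop swap arp_spa and arp_tpa without a scratch register.
ovs-ofctl add-flow br-int "priority=200,arp,arp_op=1,actions=\
move:NXM_OF_ETH_SRC[]->NXM_OF_ETH_DST[],set_field:aa:bb:cc:dd:ee:ff->eth_src,\
set_field:2->arp_op,move:NXM_NX_ARP_SHA[]->NXM_NX_ARP_THA[],\
set_field:aa:bb:cc:dd:ee:ff->arp_sha,push:NXM_OF_ARP_SPA[],\
move:NXM_OF_ARP_TPA[]->NXM_OF_ARP_SPA[],pop:NXM_OF_ARP_TPA[],IN_PORT"

# 2. Route IP packets destined to a local Pod to its port, rewriting the
#    destination MAC (IP, MAC and port name are placeholders).
ovs-ofctl add-flow br-int "priority=100,ip,nw_dst=10.0.1.5,\
actions=set_field:0a:58:0a:00:01:05->eth_dst,output:pod-a"

# 3. All other IP packets go to the host network via the gateway port.
ovs-ofctl add-flow br-int "priority=50,ip,actions=output:gw0"
```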

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

…Specifically, re-entrance traffic may be traced as follows:
Denote VIP as the cluster IP of a service, SP_IP/DP_IP as the respective client and server Pod IPs; VPort as the service port of a service, TPort as the target port of the server Pod, and SPort as the original source port. The service request's 5-tuples upon first and second entrance to the host network, and its reply's 5-tuples would be like

can u give a visual example with a list?

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Added design doc for policy-only mode.

…We refer to the last traffic pattern as re-entrance traffic because in this pattern, a traffic flow enters the host network twice -- first time for load balancing, and second time for forwarding.

the difference is that service traffic in encap mode does not re-enter the host network with the same 5-tuple.

suwang48404

comment created time in 2 months
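
As a rough iproute2 sketch of how the re-entrance routing described above could be wired up: a dedicated table sends post-DNAT service traffic back to ``gw0``, selected by a fwmark. The table id/name, the mark value, and the rule are assumptions for illustration, not Antrea's exact implementation.

```bash
# Register the custom table (id 300 is an assumed value).
echo "300 antrea_service" >> /etc/iproute2/rt_tables

# In the antrea_service table, everything is routed to the OVS gateway so
# that DNATed service traffic re-enters the bridge for policy processing.
ip route add default dev gw0 table antrea_service

# Select the table only for packets carrying a "needs policy processing"
# fwmark (0x800 is assumed; the mark would be set elsewhere in the datapath).
ip rule add fwmark 0x800/0x800 table antrea_service

# Unmarked packets (the second entrance, after OVS) fall through to the main
# table and are forwarded out the uplink as usual.
```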

pull request commentvmware-tanzu/antrea

Added design doc for policy-only mode.

/test-all

suwang48404

comment created time in 2 months

pull request commentvmware-tanzu/antrea

Added design doc for policy-only mode.

@jianjuns @abhiraut @tnqn @antoninbas

please review thx.

suwang48404

comment created time in 2 months

PR opened vmware-tanzu/antrea

Added design doc for policy-only mode.
+292 -0

0 comment

2 changed files

pr created time in 2 months

push eventsuwang48404/antrea

Wenying Dong

commit sha 847d92cb45dc883f6e91ef5f0aef5252a6cdc2b9

Support NAT functions in Antrea (#489) Expand Antrea Openflow control interface to support NAT/SNAT/DNAT functions in conntrack.

view details

Zhang Yiwei

commit sha 82cbaa72e0df17a536180412b611c45f02a39452

Add health checks to antrea components (#476)

view details

Quan Tian

commit sha ec1c21ae31660f49af3304f477b0c48f9e6958e0

Refactor route interface (#462) Currently there are separate route and iptables interfaces deeply coupled with each other while they are serving the same purpose: route container traffic in host network. The routing and NAT decision need cooperation between ip route, iptables, ip rule which are currently coordinated by its clients: AgentInitilizer and NodeRouteController, and because of there are linux specific concept, it makes windows implementation has to make a dummy implementation to map them. This patch encapsulates the host routing logic into a single interface with platform generic methods. With it: 1. The platform specific implementation detail of iptables, ip route, ip rule, ipset are hidden from its clients. 2. Some deeply coupled logic is encapsulated in a single struct, e.g. marking packets in iptables then matching the mark when routing. Besides, it enhances several implementations: 1. Instead of an iptables rule per CIDR, it uses ipset to match pod-to-external traffic, which has better performance and reduces the management complication: no dependency on encap mode, no need to update iptables at runtime. 2. It fixes the bug that iptables rules are not removed when nodes leave the cluster. 3. It uses iptables-restore to update antrea managed chains, getting rid of string matching when cleaning up stale rules.

view details

Quan Tian

commit sha 5aeefb151856b81d08381f387cf9a2f7ffa55fcc

Ignore image prune operation error (#493) The image prune operation cannot be executed in parallel. It could fail when another build is doing it and terminate the build, ignore the failure as dangling images will be cleaned up by another build or following builds anyway.

view details

jay vyas

commit sha 1215db0e11ae818aad9e6f522bdb9ad1a92e5140

Add a unit test for the policy spec builder, and fix the way we handle protocols

view details

jay vyas

commit sha 3caff3f846fea615898f4e54e328e375b0563512

Update all tests to make the protocol explicit (TCP)

view details

Abhishek Raut

commit sha 98104d88cd3e02ee745c371c8c95e1c8c5658e47

Do not log the entire Pod object (#498) Logging the entire Pod object in netpol tests adds a lot noise. Only log the Pod's Namespace and Name.

view details

Quan Tian

commit sha b16bb67ed8528bf174bfd39fd191828cac0dc0e4

Fix flaky unit test TestRamStoreWatchTimeout (#484) The test was flaky because terminating watchers is asynchronous and it could take some extra time to receive the terminated notification. The extra time was set to 10 ms which turned out to be not safe enough, this patch increases it to 100 ms.

view details

Zhengsheng Zhou

commit sha 618c11cc54490825b221c504f9f72ea5fd7276d8

CI: Wait for Machines to be Actually Deleted in Cleanup Job (#485) (#495) Due to CAPV issue [1], if Cluster and VSphereCluster are deleted before Machines, the Machines cannot be deleted. In this patch the Jenkins job firstly deletes Machines and waits for them to disappear, then it deletes VSphereCluster and Cluster. This deletion order is the reverse order of the resource creation. It should minimize the chance of Cluster API resources getting stuck in deleting state. [1] https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/issues/761 Co-authored-by: Zhengsheng Zhou <zhengshengz@vmware.com>

view details

Antonin Bas

commit sha 3d7ab16e64bb3464d1548f072846594786548bd8

Re-enable TestPodConnectivityAfterAntreaRestart on Kind We make the test framework more robust by: * scheduling CoreDNS Pods on the Kind control-plane Node to avoid connectivity issues to CoreDNS when restarting the Antrea Agent on the Kind worker Node. * ensuring that all CoreDNS Pods are restarted (on every type of cluster, not just Kind) when all Antrea Agents are restarted (e.g. because of an update). Fixes #244

view details

Antonin Bas

commit sha 6bb547dd98d5b583f8531f9d5ed1559be1acad05

Run linters on netpol Go code as part of CI We also remove the `test-fmt` target since it has been replaced by `golangci`. It is still possible to use `make fmt` to apply `gofmt` changes on the entire code base. Because `golangci-lint` can only be used in the scope of a single module (apparently), we need a separate `golangci` make target in hack/netpol. This is probably better anyway as it helps keep the Antrea core code and the netpol code separate, which is why they are different modules in the first place.

view details

Abhishek Raut

commit sha 938169e2b270049904b4ffa835ffaffde7a044ad

Add note regarding kube-proxy cluster-cidr flag (#492) Add a note to requirements regarding the necessity of cluster-cidr flag while starting kube-proxy during installation.

view details

Zhecheng Li

commit sha 2f7683df3f89e6dc2915d308d4235e6807778366

Refactor antctl error message (#509) Turn error message into a more friendly one, including http.StatusNotFound, http.StatusInternalServerError.

view details

Zhecheng Li

commit sha 3d5e14c2cb0457935b6567aa163755d3504f8376

Implemented antctl subcommand: pod-interface (#334) * This command is under get group to get pod interface information. Users can filter the result with "podName" as an arg or "namespace" as an flag. The default namespace is "default". * Remove duplicated flag in commandDefinition. * Turn isSingle field in nonResourceEndpoint to outputType which is an enum, including default, single and multiple Co-Authored-By: Antonin Bas <antonin.bas@gmail.com>

view details

Quan Tian

commit sha fe122387dbf06c8d0d7e98e4ca9cda333693c10f

Document debugging antrea controller/agent API server (#510)

view details

Abhishek Raut

commit sha 244b8319286c8224268a1721a0f3ed7b784dae31

Update Kind cluster setup doc (#518) Move the short cut section to the top since most users are more interested in getting a cluster up and running quick.

view details

Antonin Bas

commit sha 82a4c93a3f05c5202abd7d75a22ee633ea54309a

Update beta.kubernetes.io/os to kubernetes.io/os Starting with K8s 1.18, beta.kubernetes.io/os is removed. kubernetes.io/os is supported from K8s 1.14. Fixes #361

view details

Jianjun Shen

commit sha 2273f56321e35d7d08122c8937e741c8880034c7

Add antctl role to Controller to allow antctl to run in Controller Pod (#519) antctl uses controller ServiceAccount token to authenticate with controller API when running inside the Antrea controller Pod. This commit adds the antctl role to the Controller ServiceAccount, so antctl has all permissions to call the controller APIs for the controller commands.

view details

Su Wang

commit sha 03bf183cae054255628dc179b8a5393ebc8e7de3

supports policy only mode. (#449)

view details

Su Wang

commit sha e7d8502d116f66260504433b3e14a82f1c8e2410

Added terraform script to create eks. see docs/cloud.md for setting up AKS/EKS clusters, and install Antrea over them.

view details

push time in 2 months

issue openedvmware-tanzu/antrea

NetworkPolicy for External Service not working in Policy-Only mode.

Describe the bug: NetworkPolicy for External Service is not working in Policy-Only mode due to masquerading.

If the Endpoint(s) of a Service are external IPs, the service request is also masqueraded with the Node IP when it enters the OVS bridge after LB by the host network. This is required so that the reply traffic may reach the Node, and be un-DNATed in the host network.

This also means the service request loses its original source IP address; therefore, NetworkPolicy based on the source Pod IP cannot be applied to external Services.

Potential Solution:

  • The trick is to recover the original src IP; this may be possible by looking up the conntrack main zone (see the sketch below).
  • Once the original src IP is discovered, additional flows may be added for each NetworkPolicy flow to check for the original src IP.
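
A rough sketch of that idea in OVS flow syntax follows. The table numbers, zone, and Pod IP are assumptions, and this is only one possible shape of the fix, not Antrea's actual implementation.

```bash
# Send service traffic through conntrack in the host's zone (zone 0 assumed)
# so that subsequent tables can see the connection's original tuple.
ovs-ofctl add-flow br-int "table=5,priority=200,ip,actions=ct(table=6,zone=0)"

# After recirculation, ct_nw_src carries the pre-SNAT source IP, so a
# NetworkPolicy rule for client Pod 10.0.1.5 (placeholder) can match on it.
ovs-ofctl add-flow br-int \
  "table=6,priority=200,ip,ct_state=+trk,ct_nw_src=10.0.1.5,actions=resubmit(,7)"
```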

created time in 2 months

delete branch suwang48404/antrea

delete branch : cloud

delete time in 2 months

delete branch suwang48404/antrea

delete branch : eks

delete time in 2 months

pull request commentvmware-tanzu/antrea

Added terraform script to create EKS

@antoninbas

https://github.com/vmware-tanzu/antrea/pull/449 is merged. Can u please help merge this PR too, as I don't have the write privilege for Antrea. thx, Su

suwang48404

comment created time in 2 months

pull request commentvmware-tanzu/antrea

Antrea supports pass-through/policy-only mode

/test-e2e

suwang48404

comment created time in 2 months

pull request commentvmware-tanzu/antrea

Antrea supports pass-through/policy-only mode

/test-e2e

suwang48404

comment created time in 2 months

pull request commentvmware-tanzu/antrea

Antrea supports pass-through/policy-only mode

/test-all

suwang48404

comment created time in 2 months

push eventsuwang48404/antrea

Quan Tian

commit sha b16bb67ed8528bf174bfd39fd191828cac0dc0e4

Fix flaky unit test TestRamStoreWatchTimeout (#484) The test was flaky because terminating watchers is asynchronous and it could take some extra time to receive the terminated notification. The extra time was set to 10 ms which turned out to be not safe enough, this patch increases it to 100 ms.

view details

Zhengsheng Zhou

commit sha 618c11cc54490825b221c504f9f72ea5fd7276d8

CI: Wait for Machines to be Actually Deleted in Cleanup Job (#485) (#495) Due to CAPV issue [1], if Cluster and VSphereCluster are deleted before Machines, the Machines cannot be deleted. In this patch the Jenkins job firstly deletes Machines and waits for them to disappear, then it deletes VSphereCluster and Cluster. This deletion order is the reverse order of the resource creation. It should minimize the chance of Cluster API resources getting stuck in deleting state. [1] https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/issues/761 Co-authored-by: Zhengsheng Zhou <zhengshengz@vmware.com>

view details

Antonin Bas

commit sha 3d7ab16e64bb3464d1548f072846594786548bd8

Re-enable TestPodConnectivityAfterAntreaRestart on Kind We make the test framework more robust by: * scheduling CoreDNS Pods on the Kind control-plane Node to avoid connectivity issues to CoreDNS when restarting the Antrea Agent on the Kind worker Node. * ensuring that all CoreDNS Pods are restarted (on every type of cluster, not just Kind) when all Antrea Agents are restarted (e.g. because of an update). Fixes #244

view details

Antonin Bas

commit sha 6bb547dd98d5b583f8531f9d5ed1559be1acad05

Run linters on netpol Go code as part of CI We also remove the `test-fmt` target since it has been replaced by `golangci`. It is still possible to use `make fmt` to apply `gofmt` changes on the entire code base. Because `golangci-lint` can only be used in the scope of a single module (apparently), we need a separate `golangci` make target in hack/netpol. This is probably better anyway as it helps keep the Antrea core code and the netpol code separate, which is why they are different modules in the first place.

view details

Abhishek Raut

commit sha 938169e2b270049904b4ffa835ffaffde7a044ad

Add note regarding kube-proxy cluster-cidr flag (#492) Add a note to requirements regarding the necessity of cluster-cidr flag while starting kube-proxy during installation.

view details

Zhecheng Li

commit sha 2f7683df3f89e6dc2915d308d4235e6807778366

Refactor antctl error message (#509) Turn error message into a more friendly one, including http.StatusNotFound, http.StatusInternalServerError.

view details

Zhecheng Li

commit sha 3d5e14c2cb0457935b6567aa163755d3504f8376

Implemented antctl subcommand: pod-interface (#334) * This command is under get group to get pod interface information. Users can filter the result with "podName" as an arg or "namespace" as an flag. The default namespace is "default". * Remove duplicated flag in commandDefinition. * Turn isSingle field in nonResourceEndpoint to outputType which is an enum, including default, single and multiple Co-Authored-By: Antonin Bas <antonin.bas@gmail.com>

view details

Quan Tian

commit sha fe122387dbf06c8d0d7e98e4ca9cda333693c10f

Document debugging antrea controller/agent API server (#510)

view details

Abhishek Raut

commit sha 244b8319286c8224268a1721a0f3ed7b784dae31

Update Kind cluster setup doc (#518) Move the short cut section to the top since most users are more interested in getting a cluster up and running quick.

view details

Antonin Bas

commit sha 82a4c93a3f05c5202abd7d75a22ee633ea54309a

Update beta.kubernetes.io/os to kubernetes.io/os Starting with K8s 1.18, beta.kubernetes.io/os is removed. kubernetes.io/os is supported from K8s 1.14. Fixes #361

view details

Su Wang

commit sha 39db55af04b7558fc2fbbcd90da7de770a9603cf

supports policy only mode.

view details

push time in 2 months

push eventsuwang48404/antrea

Su Wang

commit sha 53a10927d59dd945fb997e5210a64a672557bcda

supports policy only mode.

view details

push time in 2 months

pull request commentvmware-tanzu/antrea

Antrea supports pass-through/policy-only mode

/test-all

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Antrea supports pass-through/policy-only mode

# Deploying Antrea in AWS EKS

This document describes steps to deploy Antrea in NetworkPolicy only mode to an AWS EKS cluster.

It assumes you already have an EKS cluster, and have the KUBECONFIG environment variable pointing to the kubeconfig file of that cluster.

Based on the EKS worker Node MTU size and the Kubernetes Service cluster IP range, adjust the **defaultMTU** and **serviceCIDR** values of antrea-agent.conf in ./build/yamls/antrea-eks.yml accordingly, and apply ./build/yamls/antrea-eks.yml to the EKS cluster.

```bash
kubectl apply -f ./build/yamls/antrea-eks.yaml
```

Now Antrea should be plugged into the EKS CNI and is ready to enforce NetworkPolicy.

done.

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Antrea supports pass-through/policy-only mode

…
### Caveats

Some Pod may already be installed before Antrea deployment. Antrea cannot enforce NetworkPolicy on these pre-installed Pod. This may be remedied by restarting the Pod. For example,

done.

suwang48404

comment created time in 2 months
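
The quoted caveat ends just before its example command. A typical way to restart such pre-existing Pods (the deployment and label below are only examples) would be:

```bash
# Recreate Pods scheduled before Antrea was deployed so their traffic goes
# through the OVS bridge; "coredns" in kube-system is just an example target.
kubectl -n kube-system rollout restart deployment coredns

# Or delete the Pods directly and let their controller recreate them.
kubectl -n kube-system delete pod -l k8s-app=kube-dns
```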

Pull request review commentvmware-tanzu/antrea

Antrea supports pass-through/policy-only mode

…Assuming you already have an EKS cluster, and have KUBECONFIG environment variable pointing to

done

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Antrea supports pass-through/policy-only mode

…Assuming you already have an EKS cluster, and have KUBECONFIG environment variable pointing to

done.

suwang48404

comment created time in 2 months

push eventsuwang48404/antrea

Su Wang

commit sha f25fa97b33ad20c6c00a3e78f4db9721d997ac87

supports policy only mode.

view details

push time in 2 months

Pull request review commentvmware-tanzu/antrea

Antrea supports pass-through/policy-only mode

…
### Caveats

Some Pod may already be installed before Antrea deployment, Antrea cannot enforce NetworkPolicy

done

suwang48404

comment created time in 2 months

push eventsuwang48404/antrea

Su Wang

commit sha 46d55b6cc668dd5b76bfe65c6688f9cd1dd7e2d7

supports policy only mode.

view details

push time in 2 months

Pull request review commentvmware-tanzu/antrea

Antrea supports pass-through/policy-only mode

…To deploy the latest version of Antrea (built from the master branch) to EKS, get the Antrea EKS deployment yaml at:

```
https://raw.githubusercontent.com/vmware-tanzu/antrea/master/build/yamls/antrea-eks.yml
```

we just need to check in build/yamls/build/antrea-eks.yml, right? Am I missing anything? I suspect the link does not work because it is not checked in yet. I use the same link u have in https://github.com/vmware-tanzu/antrea/blob/master/docs/ipsec-tunnel.md for IPSec, i.e.

https://raw.githubusercontent.com/vmware-tanzu/antrea/master/build/yamls/antrea-ipsec.yml

suwang48404

comment created time in 2 months
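
Once the file is checked in, the same consumption pattern as the IPSec yaml should apply; a sketch using the URL quoted in this thread:

```bash
# Apply the EKS manifest straight from the repo (works once antrea-eks.yml
# exists on the master branch).
kubectl apply -f https://raw.githubusercontent.com/vmware-tanzu/antrea/master/build/yamls/antrea-eks.yml
```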

Pull request review commentvmware-tanzu/antrea

Antrea supports pass-through/policy-only mode

…
### Caveats

Some Pod may already be installed before Antrea deployment, Antrea cannot enforce NetworkPolicy

Ok. I think I misunderstood the comment. I'll break it into two sentences: "Some Pod may already be installed before Antrea deployment. Antrea cannot enforce NetworkPolicy. .... "

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Antrea supports pass-through/policy-only mode

…
### Caveats

Some Pod may already be installed before Antrea deployment. Antrea cannot enforce NetworkPolicy on these pre-installed Pod. This may be remedied by restart the Pod, i.e.

done.

suwang48404

comment created time in 2 months

Pull request review commentvmware-tanzu/antrea

Antrea supports pass-through/policy-only mode

…
```bash
kubectl apply -f antrea-eks.yaml
```

Now Antrea should be plugged into the EKS CNI and is ready to enforce NetworkPolicy.

done.

suwang48404

comment created time in 2 months

push eventsuwang48404/antrea

Su Wang

commit sha 4951536419be673a1e169afd8d244fee1964e944

supports policy only mode.

view details

push time in 2 months

Pull request review commentvmware-tanzu/antrea

Antrea supports pass-through/policy-only mode

…
apiVersion: v1
data:
  antrea-agent.conf: |
    # Name of the OpenVSwitch bridge antrea-agent will create and use.
    # Make sure it doesn't conflict with your existing OpenVSwitch bridges.
    #ovsBridge: br-int

    # Datapath type to use for the OpenVSwitch bridge created by Antrea. Supported values are:
    # - system
    # - netdev
    # 'system' is the default value and corresponds to the kernel datapath. Use 'netdev' to run
    # OVS in userspace mode. Userspace mode requires the tun device driver to be available.
    #ovsDatapathType: system

    # Name of the interface antrea-agent will create and use for host <--> pod communication.
    # Make sure it doesn't conflict with your existing interfaces.
    #hostGateway: gw0

    # Encapsulation mode for communication between Pods across Nodes, supported values:

Makes sense; maybe in a separate PR? I think today all build/yamls/antrea-xxx.yml files come from the same build/base/antrea-agent.conf.

suwang48404

comment created time in 2 months

pull request commentvmware-tanzu/antrea

Antrea supports pass-through/policy-only mode

/skip-all

suwang48404

comment created time in 2 months
