OpenNESS (20.09.01) interfaceservice config for the OpenVINO samples #33
Description
Hello,
I have built a network edge deployment (OpenNESS 20.09.01) cluster between two
QEMU/KVM machines and have successfully tested the producer/consumer apps.
The edge node (worker) is provisioned with two network interfaces (eth0 and eth1), with the idea
of dedicating one to management/control and the second to data. The worker connects
to the master via eth0, and eth1 is on a different subnet.
To route traffic from an external machine to the OpenVINO apps, the documentation suggests
bridging the second (data) network interface on the Edge node to the OVS, but I am unable to do so.
Specifically, listing the interfaces on the worker node returns the two expected kernel
interfaces (eth0 and eth1, for control and data respectively), but the output
does not include the interfaces' MAC addresses:
[root@controller ~]# kubectl interfaceservice get node01
Kernel interfaces:
0000:00:04.0 | | detached
0000:00:05.0 | | detached
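The empty middle column is where the MAC address would normally appear, and interfaceservice reads that from sysfs on the node. A quick sketch to run on node01 to check whether the kernel actually exposes a netdev (and MAC) for each PCI address (`pci_to_netdev` is a hypothetical helper of mine, not part of OpenNESS; it mirrors the usual sysfs lookup):

```shell
# Sketch: check whether each PCI address resolves to a kernel netdev and a MAC.
# pci_to_netdev is a hypothetical helper, not part of OpenNESS.
pci_to_netdev() {
    # Prints the netdev name bound to the given PCI address, or nothing if
    # no driver/netdev is attached.
    ls "/sys/bus/pci/devices/$1/net" 2>/dev/null
}

for pci in 0000:00:04.0 0000:00:05.0; do
    dev=$(pci_to_netdev "$pci")
    if [ -n "$dev" ]; then
        echo "$pci -> $dev $(cat "/sys/class/net/$dev/address")"
    else
        echo "$pci -> no netdev bound"
    fi
done
```

If this prints "no netdev bound" for a device, the NIC has no netdev name attached, which would explain both the blank MAC column and the failing attach below.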
Further, attaching the interface fails as follows:
[root@controller ~]# kubectl interfaceservice attach node01 0000:00:05.0
Error when executing command: [attach] err: rpc error: code = Unknown desc = ovs-vsctl: port name must not be empty string
: exit status 1
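The "port name must not be empty string" error suggests interfaceservice resolved the PCI address to an empty netdev name before handing it to ovs-vsctl. A rough manual equivalent of the attach step, showing where the empty name would come from (a sketch, not the actual implementation; `br-local` is an assumed OVS bridge name for the kube-ovn setup, so verify the real bridge name on your node):

```shell
# Rough manual equivalent of "kubectl interfaceservice attach" (a sketch).
# br-local is an assumed OVS bridge name; verify it on the edge node.
PCI=0000:00:05.0
NAME=$(ls "/sys/bus/pci/devices/$PCI/net" 2>/dev/null)

if [ -z "$NAME" ]; then
    # This is the failure mode the error message points at: no netdev name,
    # so ovs-vsctl ends up being called with an empty port name.
    echo "no netdev for $PCI; ovs-vsctl add-port would fail" >&2
elif command -v ovs-vsctl >/dev/null 2>&1; then
    # On the edge node this is (roughly) the port-add step.
    ovs-vsctl add-port br-local "$NAME"
else
    echo "would run: ovs-vsctl add-port br-local $NAME"
fi
```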
Is anything misconfigured on my Edge node? Please advise.
If it helps, below is a listing of all PCI devices on the Edge/worker node,
which shows the two Ethernet controllers:
[root@node01 ~]# lspci -Dmm
0000:00:00.0 "Host bridge" "Intel Corporation" "440FX - 82441FX PMC [Natoma]" -r02 "Red Hat, Inc." "Qemu virtual machine"
0000:00:01.0 "ISA bridge" "Intel Corporation" "82371AB/EB/MB PIIX4 ISA" -r03 "" ""
0000:00:01.3 "Bridge" "Intel Corporation" "82371AB/EB/MB PIIX4 ACPI" -r03 "" ""
0000:00:03.0 "Non-VGA unclassified device" "Red Hat, Inc." "Virtio SCSI" "Red Hat, Inc." "Device 0008"
0000:00:04.0 "Ethernet controller" "Red Hat, Inc." "Virtio network device" "Red Hat, Inc." "Device 0001"
0000:00:05.0 "Ethernet controller" "Red Hat, Inc." "Virtio network device" "Red Hat, Inc." "Device 0001"
0000:00:06.0 "Unclassified device [00ff]" "Red Hat, Inc." "Virtio RNG" "Red Hat, Inc." "Device 0004"
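Since both NICs are virtio devices, it may also be worth confirming on the node that a driver is actually bound to each one, since an unbound device has no netdev name for interfaceservice to report. A hedged check via sysfs (run on node01):

```shell
# Show which kernel driver (if any) is bound to each virtio NIC; the driver
# symlink is absent when no driver is attached to the device. (Sketch.)
for pci in 0000:00:04.0 0000:00:05.0; do
    link="/sys/bus/pci/devices/$pci/driver"
    if [ -L "$link" ]; then
        echo "$pci driver: $(basename "$(readlink "$link")")"
    else
        echo "$pci driver: none"
    fi
done
```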
Also, below is the status of all pods as seen from the master:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-66bff467f8-q5nmx 1/1 Running 1 15h
kube-system descheduler-cronjob-1605500040-n7kdq 0/1 Completed 0 13h
kube-system descheduler-cronjob-1605500160-9jwbc 0/1 Completed 0 13h
kube-system descheduler-cronjob-1605500280-c2bkc 0/1 Completed 0 13h
kube-system etcd-controller 1/1 Running 1 15h
kube-system kube-apiserver-controller 1/1 Running 1 15h
kube-system kube-controller-manager-controller 1/1 Running 1 15h
kube-system kube-multus-ds-amd64-fl9hr 1/1 Running 1 15h
kube-system kube-multus-ds-amd64-xdxq5 1/1 Running 1 13h
kube-system kube-ovn-cni-f9bbd 1/1 Running 2 15h
kube-system kube-ovn-cni-lcbh5 1/1 Running 1 14h
kube-system kube-ovn-controller-75775847c8-8njqv 1/1 Running 1 15h
kube-system kube-ovn-controller-75775847c8-c7dlf 1/1 Running 1 15h
kube-system kube-proxy-hhv5b 1/1 Running 1 15h
kube-system kube-proxy-rwr8m 1/1 Running 1 14h
kube-system kube-scheduler-controller 1/1 Running 2 14h
kube-system ovn-central-7585cd4b5c-6qcgf 1/1 Running 1 15h
kube-system ovs-ovn-cmg8q 1/1 Running 1 15h
kube-system ovs-ovn-jlv9n 1/1 Running 1 14h
kubevirt virt-api-f94f8b959-tsxkt 1/1 Running 1 13h
kubevirt virt-api-f94f8b959-vwx2v 1/1 Running 1 13h
kubevirt virt-controller-64766f7cbf-hs6vq 1/1 Running 1 13h
kubevirt virt-controller-64766f7cbf-sjf4n 1/1 Running 1 13h
kubevirt virt-handler-hqqgw 1/1 Running 1 13h
kubevirt virt-operator-79c97797-k5k9x 1/1 Running 1 15h
kubevirt virt-operator-79c97797-xf7pv 1/1 Running 1 15h
openness docker-registry-deployment-54d5bb5c-dzvkl 1/1 Running 1 15h
openness eaa-5c87c49c9-bvlfq 1/1 Running 1 13h
openness edgedns-gh8dt 1/1 Running 1 13h
openness interfaceservice-dxb5n 1/1 Running 1 13h
openness nfd-release-node-feature-discovery-master-6cf7cf5f69-m9gbw 1/1 Running 1 14h
openness nfd-release-node-feature-discovery-worker-r6gxx 1/1 Running 1 14h
telemetry cadvisor-j9lcm 2/2 Running 2 14h
telemetry collectd-nd2r5 2/2 Running 2 14h
telemetry custom-metrics-apiserver-54699b845f-h7td4 1/1 Running 1 14h
telemetry grafana-6b79c984b-vjhx7 2/2 Running 2 14h
telemetry otel-collector-7d5b75bbdf-wpnmv 2/2 Running 2 14h
telemetry prometheus-node-exporter-njspv 1/1 Running 3 14h
telemetry prometheus-server-776b5f44f-v2fgh 3/3 Running 3 14h
telemetry telemetry-aware-scheduling-68467c4ccd-q2q7h 2/2 Running 2 14h
telemetry telemetry-collector-certs-dklhx 0/1 Completed 0 14h
telemetry telemetry-node-certs-xh9bv 1/1 Running 1 14h