This repository was archived by the owner on Nov 20, 2021. It is now read-only.
Add crit down command/functionality #8
Open
Description
Just as the crit up command is used to bootstrap a new node, a crit down subcommand should be added that stops and cleans up that node. This mostly involves using the cri-api to list and stop all containers running on the node, and stopping the kubelet service. The protobuf file specifying the runtime service can be seen here, and usage of the cri-api within crit is demonstrated here:
- https://github.com/criticalstack/crit/blob/master/pkg/kubernetes/remote/remote.go
Lines 73 to 98 in d9fce24
```go
r, err := remote.NewRuntimeServiceClient(ctx, cfg.NodeConfiguration.ContainerRuntime.CRISocket())
if err != nil {
	return err
}
// The apiserver container is only ever queried once, because the
// presumption is made that it should not have to restart during
// initial bootstrapping. Should this no longer be the case in the
// future, this will need to be adapted to account for scenarios where
// the intended flow of the apiserver allows for 1 or more container
// restarts.
var container *runtimeapi.Container
if err := wait.PollImmediateUntil(500*time.Millisecond, func() (bool, error) {
	var err error
	container, err = r.GetContainerByName(ctx, "kube-apiserver")
	if err != nil {
		return false, nil
	}
	return true, nil
}, ctx.Done()); err != nil {
	return err
}
status, err := r.GetContainerStatus(ctx, container.GetId())
if err != nil {
	return err
}
```
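For reference, here is a minimal sketch of what the teardown could look like. It works directly against the runtimeapi.RuntimeServiceClient gRPC client generated from the cri-api protobuf (the remote helper shown above wraps a client like this); the `down` function name, the stop ordering, and the systemctl call are assumptions for illustration, not existing crit code:

```go
package main

import (
	"context"
	"os/exec"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2"
)

// down stops the kubelet and then stops every container the CRI runtime
// reports on this node. Illustrative sketch only; crit may manage the
// kubelet service differently.
func down(ctx context.Context, r runtimeapi.RuntimeServiceClient) error {
	// Stop the kubelet first so it cannot restart static pod containers
	// (e.g. kube-apiserver) mid-teardown. Assumes a systemd-managed kubelet.
	if err := exec.CommandContext(ctx, "systemctl", "stop", "kubelet").Run(); err != nil {
		return err
	}

	// List every container known to the runtime on this node.
	resp, err := r.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	if err != nil {
		return err
	}

	// Stop each container, allowing 30 seconds for a graceful exit.
	for _, c := range resp.Containers {
		req := &runtimeapi.StopContainerRequest{
			ContainerId: c.GetId(),
			Timeout:     30,
		}
		if _, err := r.StopContainer(ctx, req); err != nil {
			return err
		}
	}
	return nil
}
```

Stopping the kubelet before the containers avoids a race where it immediately recreates the static pods that crit down is trying to tear down.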
I do not believe any files should be removed as part of crit down (none that I can think of currently), but a good litmus test for the functionality is that a user can run crit up again after running crit down and it bootstraps a new node just as it did initially.