I have tried Falco before, and also tried Falco on EKS, but these are notes from trying Falco on EKS once again.
| Component | Version |
|---|---|
| EKS | 1.19 |
| Platform version | eks.5 |
| Falco | 0.29.1 |
| Falco chart | 1.15.3 |
## Preparing the cluster

Create a 1.19 cluster.
```sh
cat << EOF > cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: falco
  region: ap-northeast-1
  version: "1.19"
vpc:
  cidr: "10.2.0.0/16"
availabilityZones:
  - ap-northeast-1a
  - ap-northeast-1c
managedNodeGroups:
  - name: managed-ng-1
    minSize: 2
    maxSize: 2
    desiredCapacity: 2
    privateNetworking: true
cloudWatch:
  clusterLogging:
    enableTypes: ["*"]
iam:
  withOIDC: true
EOF
```

```sh
eksctl create cluster -f cluster.yaml
```
## Deploying a sample app

Create a sample Nginx Deployment.
```sh
cat << EOF > deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
                      - arm64
      containers:
        - name: nginx
          image: nginx:1.19.2
          ports:
            - containerPort: 80
EOF
```

```
$ kubectl apply -f deployment.yaml
deployment.apps/nginx created
```
Check the Deployment.

```
$ kubectl get deployments --all-namespaces
NAMESPACE     NAME      READY   UP-TO-DATE   AVAILABLE   AGE
default       nginx     3/3     3            3           13s
kube-system   coredns   2/2     2            2           4h17m
```
## Deploying Fluent Bit

Rather than the repository below, this time I follow this procedure.
Create the amazon-cloudwatch Namespace.

```sh
kubectl apply -f https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/cloudwatch-namespace.yaml
```
Create the ConfigMap.

```sh
ClusterName=falco
RegionName=ap-northeast-1
FluentBitHttpPort='2020'
FluentBitReadFromHead='Off'
[[ ${FluentBitReadFromHead} = 'On' ]] && FluentBitReadFromTail='Off' || FluentBitReadFromTail='On'
[[ -z ${FluentBitHttpPort} ]] && FluentBitHttpServer='Off' || FluentBitHttpServer='On'
kubectl create configmap fluent-bit-cluster-info \
  --from-literal=cluster.name=${ClusterName} \
  --from-literal=http.server=${FluentBitHttpServer} \
  --from-literal=http.port=${FluentBitHttpPort} \
  --from-literal=read.head=${FluentBitReadFromHead} \
  --from-literal=read.tail=${FluentBitReadFromTail} \
  --from-literal=logs.region=${RegionName} -n amazon-cloudwatch
```
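The two toggles in that snippet can be sanity-checked on their own. This is the same logic rewritten with POSIX `[` (so it also runs outside bash), with the computed values echoed:

```shell
# ReadFromTail is the inverse of ReadFromHead, and the Fluent Bit HTTP
# server is enabled only when a port is set.
FluentBitHttpPort='2020'
FluentBitReadFromHead='Off'
[ "${FluentBitReadFromHead}" = 'On' ] && FluentBitReadFromTail='Off' || FluentBitReadFromTail='On'
[ -z "${FluentBitHttpPort}" ] && FluentBitHttpServer='Off' || FluentBitHttpServer='On'
echo "${FluentBitReadFromTail} ${FluentBitHttpServer}"
# → On On
```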
Deploy Fluent Bit.

```sh
kubectl apply -f https://raw.githubusercontent.com/aws-samples/amazon-cloudwatch-container-insights/latest/k8s-deployment-manifest-templates/deployment-mode/daemonset/container-insights-monitoring/fluent-bit/fluent-bit.yaml
```
Attach the CloudWatchAgentServerPolicy policy with IRSA.

```sh
eksctl create iamserviceaccount \
  --name fluent-bit \
  --namespace amazon-cloudwatch \
  --cluster falco \
  --attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy \
  --override-existing-serviceaccounts \
  --approve
```
Check the Fluent Bit configuration.
```
$ k -n amazon-cloudwatch get cm fluent-bit-config -o yaml | k neat
apiVersion: v1
data:
  application-log.conf: |
    [INPUT]
        Name                tail
        Tag                 application.*
        Exclude_Path        /var/log/containers/cloudwatch-agent*, /var/log/containers/fluent-bit*, /var/log/containers/aws-node*, /var/log/containers/kube-proxy*
        Path                /var/log/containers/*.log
        Docker_Mode         On
        Docker_Mode_Flush   5
        Docker_Mode_Parser  container_firstline
        Parser              docker
        DB                  /var/fluent-bit/state/flb_container.db
        Mem_Buf_Limit       50MB
        Skip_Long_Lines     On
        Refresh_Interval    10
        Rotate_Wait         30
        storage.type        filesystem
        Read_from_Head      ${READ_FROM_HEAD}

    [INPUT]
        Name                tail
        Tag                 application.*
        Path                /var/log/containers/fluent-bit*
        Parser              docker
        DB                  /var/fluent-bit/state/flb_log.db
        Mem_Buf_Limit       5MB
        Skip_Long_Lines     On
        Refresh_Interval    10
        Read_from_Head      ${READ_FROM_HEAD}

    [INPUT]
        Name                tail
        Tag                 application.*
        Path                /var/log/containers/cloudwatch-agent*
        Docker_Mode         On
        Docker_Mode_Flush   5
        Docker_Mode_Parser  cwagent_firstline
        Parser              docker
        DB                  /var/fluent-bit/state/flb_cwagent.db
        Mem_Buf_Limit       5MB
        Skip_Long_Lines     On
        Refresh_Interval    10
        Read_from_Head      ${READ_FROM_HEAD}

    [FILTER]
        Name                kubernetes
        Match               application.*
        Kube_URL            https://kubernetes.default.svc:443
        Kube_Tag_Prefix     application.var.log.containers.
        Merge_Log           On
        Merge_Log_Key       log_processed
        K8S-Logging.Parser  On
        K8S-Logging.Exclude Off
        Labels              Off
        Annotations         Off

    [OUTPUT]
        Name                cloudwatch_logs
        Match               application.*
        region              ${AWS_REGION}
        log_group_name      /aws/containerinsights/${CLUSTER_NAME}/application
        log_stream_prefix   ${HOST_NAME}-
        auto_create_group   true
        extra_user_agent    container-insights
  dataplane-log.conf: |
    [INPUT]
        Name                systemd
        Tag                 dataplane.systemd.*
        Systemd_Filter      _SYSTEMD_UNIT=docker.service
        Systemd_Filter      _SYSTEMD_UNIT=kubelet.service
        DB                  /var/fluent-bit/state/systemd.db
        Path                /var/log/journal
        Read_From_Tail      ${READ_FROM_TAIL}

    [INPUT]
        Name                tail
        Tag                 dataplane.tail.*
        Path                /var/log/containers/aws-node*, /var/log/containers/kube-proxy*
        Docker_Mode         On
        Docker_Mode_Flush   5
        Docker_Mode_Parser  container_firstline
        Parser              docker
        DB                  /var/fluent-bit/state/flb_dataplane_tail.db
        Mem_Buf_Limit       50MB
        Skip_Long_Lines     On
        Refresh_Interval    10
        Rotate_Wait         30
        storage.type        filesystem
        Read_from_Head      ${READ_FROM_HEAD}

    [FILTER]
        Name                modify
        Match               dataplane.systemd.*
        Rename              _HOSTNAME      hostname
        Rename              _SYSTEMD_UNIT  systemd_unit
        Rename              MESSAGE        message
        Remove_regex        ^((?!hostname|systemd_unit|message).)*$

    [FILTER]
        Name                aws
        Match               dataplane.*
        imds_version        v1

    [OUTPUT]
        Name                cloudwatch_logs
        Match               dataplane.*
        region              ${AWS_REGION}
        log_group_name      /aws/containerinsights/${CLUSTER_NAME}/dataplane
        log_stream_prefix   ${HOST_NAME}-
        auto_create_group   true
        extra_user_agent    container-insights
  fluent-bit.conf: "[SERVICE]\n    Flush        5\n    Log_Level    info\n    Daemon       off\n    Parsers_File parsers.conf\n    HTTP_Server  ${HTTP_SERVER}\n    HTTP_Listen  0.0.0.0\n    HTTP_Port    ${HTTP_PORT}\n    storage.path /var/fluent-bit/state/flb-storage/\n    storage.sync normal\n    storage.checksum off\n    storage.backlog.mem_limit 5M\n    \n@INCLUDE application-log.conf\n@INCLUDE dataplane-log.conf\n@INCLUDE host-log.conf\n"
  host-log.conf: |
    [INPUT]
        Name                tail
        Tag                 host.dmesg
        Path                /var/log/dmesg
        Parser              syslog
        DB                  /var/fluent-bit/state/flb_dmesg.db
        Mem_Buf_Limit       5MB
        Skip_Long_Lines     On
        Refresh_Interval    10
        Read_from_Head      ${READ_FROM_HEAD}

    [INPUT]
        Name                tail
        Tag                 host.messages
        Path                /var/log/messages
        Parser              syslog
        DB                  /var/fluent-bit/state/flb_messages.db
        Mem_Buf_Limit       5MB
        Skip_Long_Lines     On
        Refresh_Interval    10
        Read_from_Head      ${READ_FROM_HEAD}

    [INPUT]
        Name                tail
        Tag                 host.secure
        Path                /var/log/secure
        Parser              syslog
        DB                  /var/fluent-bit/state/flb_secure.db
        Mem_Buf_Limit       5MB
        Skip_Long_Lines     On
        Refresh_Interval    10
        Read_from_Head      ${READ_FROM_HEAD}

    [FILTER]
        Name                aws
        Match               host.*
        imds_version        v1

    [OUTPUT]
        Name                cloudwatch_logs
        Match               host.*
        region              ${AWS_REGION}
        log_group_name      /aws/containerinsights/${CLUSTER_NAME}/host
        log_stream_prefix   ${HOST_NAME}.
        auto_create_group   true
        extra_user_agent    container-insights
  parsers.conf: |
    [PARSER]
        Name                docker
        Format              json
        Time_Key            time
        Time_Format         %Y-%m-%dT%H:%M:%S.%LZ

    [PARSER]
        Name                syslog
        Format              regex
        Regex               ^(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$
        Time_Key            time
        Time_Format         %b %d %H:%M:%S

    [PARSER]
        Name                container_firstline
        Format              regex
        Regex               (?<log>(?<="log":")\S(?!\.).*?)(?<!\\)".*(?<stream>(?<="stream":").*?)".*(?<time>\d{4}-\d{1,2}-\d{1,2}T\d{2}:\d{2}:\d{2}\.\w*).*(?=})
        Time_Key            time
        Time_Format         %Y-%m-%dT%H:%M:%S.%LZ

    [PARSER]
        Name                cwagent_firstline
        Format              regex
        Regex               (?<log>(?<="log":")\d{4}[\/-]\d{1,2}[\/-]\d{1,2}[ T]\d{2}:\d{2}:\d{2}(?!\.).*?)(?<!\\)".*(?<stream>(?<="stream":").*?)".*(?<time>\d{4}-\d{1,2}-\d{1,2}T\d{2}:\d{2}:\d{2}\.\w*).*(?=})
        Time_Key            time
        Time_Format         %Y-%m-%dT%H:%M:%S.%LZ
kind: ConfigMap
metadata:
  labels:
    k8s-app: fluent-bit
  name: fluent-bit-config
  namespace: amazon-cloudwatch
```
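The `syslog` parser regex in the ConfigMap can be exercised locally against a sample `/var/log/messages` line. This assumes GNU grep with PCRE support (`-P`); the sample line is taken from the Falco output later in this memo:

```shell
# Match a syslog-format line with the Container Insights "syslog" parser regex.
# The named groups (time, host, ident, pid, message) mirror the parser's fields.
line='Jul 12 09:48:26 ip-10-2-115-196 falco: 09:48:26.733184775: Notice A shell was spawned'
echo "$line" | grep -qP '^(?<time>[^ ]* {1,2}[^ ]* [^ ]*) (?<host>[^ ]*) (?<ident>[a-zA-Z0-9_\/\.\-]*)(?:\[(?<pid>[0-9]+)\])?(?:[^\:]*\:)? *(?<message>.*)$' && echo matched
# → matched
```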
Attach this policy to the EKS nodes. These days I would use IRSA, but I leave it as is.

```sh
POLICY_ARN=$(aws iam list-policies | jq -r '.[][] | select(.PolicyName == "EKS-CloudWatchLogs") | .Arn')
ROLE_NAME=$(aws iam list-roles | jq -r '.[][] | select( .RoleName | contains("falco") and contains("NodeInstanceRole") ) | .RoleName')
aws iam attach-role-policy --role-name ${ROLE_NAME} --policy-arn ${POLICY_ARN}
```
The Fluent Bit configuration file looks like the following, so just change the region to ap-northeast-1.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  labels:
    app.kubernetes.io/name: fluentbit
data:
  fluent-bit.conf: |
    [SERVICE]
        Parsers_File parsers.conf
    [INPUT]
        Name              tail
        Tag               falco.*
        Path              /var/log/containers/falco*.log
        Parser            falco
        DB                /var/log/flb_falco.db
        Mem_Buf_Limit     5MB
        Skip_Long_Lines   On
        Refresh_Interval  10
    [OUTPUT]
        Name              cloudwatch
        Match             falco.**
        region            ap-northeast-1
        log_group_name    falco
        log_stream_name   alerts
        auto_create_group true
  parsers.conf: |
    [PARSER]
        Name        falco
        Format      json
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L
        Time_Keep   Off
        # Command      | Decoder | Field | Optional Action
        # =============|==================|=================
        Decode_Field_As json log
```
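To see what `Decode_Field_As json log` is for: each docker-format log line has a `log` field that is itself a JSON string (assuming Falco is running with JSON output enabled), and the decoder parses that embedded JSON into structured fields. A rough illustration outside Fluent Bit, with python3 standing in for the decoder and a made-up alert line:

```shell
# A docker-format container log line whose "log" field holds a Falco JSON alert.
line='{"log":"{\"rule\":\"Terminal shell in container\",\"priority\":\"Notice\"}\n","stream":"stdout","time":"2021-07-12T09:23:51.000000000Z"}'
# Decode_Field_As json log parses the embedded JSON; python3 does the same here.
echo "$line" | python3 -c 'import json,sys; rec=json.load(sys.stdin); print(json.loads(rec["log"])["rule"])'
# → Terminal shell in container
```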
Confirm in the management console that logs are being collected.
## Deploying Falco

### Helm

Add the Falco Helm chart repository.

```sh
helm repo add falcosecurity https://falcosecurity.github.io/charts
helm repo update
```
Check the repository.

```
$ helm repo list
NAME            URL
(snip)
falcosecurity   https://falcosecurity.github.io/charts
```
Clone the repository.

```sh
git clone https://github.com/falcosecurity/charts
```

The rule files are stored in the rules folder of the falco chart directory.
Check the charts.

```
$ helm search repo falco
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
falcosecurity/falco             1.15.3          0.29.1          Falco
falcosecurity/falco-exporter    0.5.1           0.5.0           Prometheus Metrics Exporter for Falco output ev...
falcosecurity/falcosidekick     0.3.9           2.23.1          A simple daemon to help you with falco's outputs
stable/falco                    1.1.8           0.0.1           DEPRECATED - incubator/falco
```
Check the default values.

```sh
helm inspect values falcosecurity/falco
```
Install Falco using the init container approach.

```sh
cat << EOF > values.yaml
image:
  repository: falcosecurity/falco-no-driver
extraInitContainers:
  - name: driver-loader
    image: docker.io/falcosecurity/falco-driver-loader:0.29.1
    imagePullPolicy: Always
    securityContext:
      privileged: true
    volumeMounts:
      - mountPath: /host/proc
        name: proc-fs
        readOnly: true
      - mountPath: /host/boot
        name: boot-fs
        readOnly: true
      - mountPath: /host/lib/modules
        name: lib-modules
      - mountPath: /host/usr
        name: usr-fs
        readOnly: true
      - mountPath: /host/etc
        name: etc-fs
        readOnly: true
EOF
```

```
$ helm upgrade --install falco falcosecurity/falco -n falco --create-namespace -f values.yaml
Release "falco" does not exist. Installing it now.
NAME: falco
LAST DEPLOYED: Mon Jul 12 17:37:37 2021
NAMESPACE: falco
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Falco agents are spinning up on each node in your cluster. After a few
seconds, they are going to start monitoring your containers looking for
security issues.

No further action should be required.

Tip:
You can easily forward Falco events to Slack, Kafka, AWS Lambda and more with falcosidekick.
Full list of outputs: https://github.com/falcosecurity/charts/falcosidekick.
You can enable its deployment with `--set falcosidekick.enabled=true` or in your values.yaml.
See: https://github.com/falcosecurity/charts/blob/master/falcosidekick/values.yaml for configuration values.
```
Check the Pods.

```
$ k -n falco get po
NAME          READY   STATUS    RESTARTS   AGE
falco-jcqg4   1/1     Running   0          73s
falco-q5dfx   1/1     Running   0          73s
```
The logs show that detection seems to be working.

```
$ k -n falco logs -f falco-jcqg4
Mon Jul 12 09:20:36 2021: Falco version 0.29.1 (driver version 17f5df52a7d9ed6bb12d3b1768460def8439936d)
Mon Jul 12 09:20:36 2021: Falco initialized with configuration file /etc/falco/falco.yaml
Mon Jul 12 09:20:36 2021: Loading rules from file /etc/falco/falco_rules.yaml:
Mon Jul 12 09:20:36 2021: Loading rules from file /etc/falco/falco_rules.local.yaml:
Mon Jul 12 09:20:36 2021: Starting internal webserver, listening on port 8765
09:20:36.994128000: Notice Privileged container started (user=<NA> user_loginuid=0 command=container:98f37173b8a2 k8s.ns=falco k8s.pod=falco-jcqg4 container=98f37173b8a2 image=falcosecurity/falco-no-driver:0.29.1) k8s.ns=falco k8s.pod=falco-jcqg4 container=98f37173b8a2
09:20:43.432799926: Notice Unexpected connection to K8s API Server from container (command=flb-pipeline -e /fluent-bit/firehose.so -e /fluent-bit/cloudwatch.so -e /fluent-bit/kinesis.so -c /fluent-bit/etc/fluent-bit.conf k8s.ns=amazon-cloudwatch k8s.pod=fluent-bit-2hx86 container=f836f31bf693 image=amazon/aws-for-fluent-bit:2.10.0 connection=10.2.122.129:34174->172.20.0.1:443) k8s.ns=amazon-cloudwatch k8s.pod=fluent-bit-2hx86 container=f836f31bf693
09:20:43.542446397: Notice Unexpected connection to K8s API Server from container (command=flb-pipeline -e /fluent-bit/firehose.so -e /fluent-bit/cloudwatch.so -e /fluent-bit/kinesis.so -c /fluent-bit/etc/fluent-bit.conf k8s.ns=amazon-cloudwatch k8s.pod=fluent-bit-2hx86 container=f836f31bf693 image=amazon/aws-for-fluent-bit:2.10.0 connection=10.2.122.129:34176->172.20.0.1:443) k8s.ns=amazon-cloudwatch k8s.pod=fluent-bit-2hx86 container=f836f31bf693
```
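The "Unexpected connection to K8s API Server" alerts above are Fluent Bit talking to the API server, which the default rules do not treat as trusted. One way to quiet them would be to append the image to the upstream `k8s_containers` macro via the chart's `customRules` value. This is only a sketch: the file name `rules-fluentbit.yaml` is arbitrary, and it assumes the macro and `append` behavior of the 0.29-era default ruleset.

```yaml
# values.yaml fragment (hypothetical): append the Container Insights
# Fluent Bit image to the macro used by the API-server connection rule.
customRules:
  rules-fluentbit.yaml: |-
    - macro: k8s_containers
      condition: (container.image.repository = amazon/aws-for-fluent-bit)
      append: true
```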
Run kubectl exec from another terminal and confirm it is detected.

```
$ k -n default exec -it nginx-c75788bfd-wwczv -- bash
root@nginx-c75788bfd-wwczv:/# exit
exit
```
It was detected.

```
09:23:51.478516126: Notice A shell was spawned in a container with an attached terminal (user=<NA> user_loginuid=-1 k8s.ns=default k8s.pod=nginx-c75788bfd-wwczv container=0da1313c2a74 shell=bash parent=runc cmdline=bash terminal=34816 container_id=0da1313c2a74 image=nginx) k8s.ns=default k8s.pod=nginx-c75788bfd-wwczv container=0da1313c2a74
09:24:09.330380954: Warning Shell history had been deleted or renamed (user=<NA> user_loginuid=-1 type=openat command=bash fd.name=/root/.bash_history name=/root/.bash_history path=<NA> oldpath=<NA> k8s.ns=default k8s.pod=nginx-c75788bfd-wwczv container=0da1313c2a74) k8s.ns=default k8s.pod=nginx-c75788bfd-wwczv container=0da1313c2a74
```
On the node, the falco kernel module is loaded.

```
[root@ip-10-2-115-196 ~]# lsmod | grep falco
falco                 647168  2
```
Note that even after deleting the chart, the loaded kernel module remains; rebooting the node removes it.

Delete the chart.

```sh
helm delete falco -n falco
k delete ns falco
```
### Systemd

Install Falco.

```sh
rpm --import https://falco.org/repo/falcosecurity-3672BA8F.asc
curl -s -o /etc/yum.repos.d/falcosecurity.repo https://falco.org/repo/falcosecurity-rpm.repo
yum -y install kernel-devel-$(uname -r)
yum -y install falco
```
Run Falco.

```sh
systemctl enable falco
systemctl start falco
```
Check it.

```
[root@ip-10-2-115-196 ~]# systemctl status falco
● falco.service - Falco: Container Native Runtime Security
   Loaded: loaded (/usr/lib/systemd/system/falco.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2021-07-12 09:31:48 UTC; 39s ago
     Docs: https://falco.org/docs/
  Process: 5448 ExecStartPre=/sbin/modprobe falco (code=exited, status=0/SUCCESS)
 Main PID: 5473 (falco)
    Tasks: 11
   Memory: 27.6M
   CGroup: /system.slice/falco.service
           └─5473 /usr/bin/falco --pidfile=/var/run/falco.pid

Jul 12 09:31:48 ip-10-2-115-196.ap-northeast-1.compute.internal falco[5473]: Falco initialized with configuration file /etc/falco/falco.yaml
Jul 12 09:31:48 ip-10-2-115-196.ap-northeast-1.compute.internal falco[5473]: Mon Jul 12 09:31:48 2021: Falco initialized with configuration file /etc/falco/falco.yaml
Jul 12 09:31:48 ip-10-2-115-196.ap-northeast-1.compute.internal falco[5473]: Loading rules from file /etc/falco/falco_rules.yaml:
Jul 12 09:31:48 ip-10-2-115-196.ap-northeast-1.compute.internal falco[5473]: Mon Jul 12 09:31:48 2021: Loading rules from file /etc/falco/falco_rules.yaml:
Jul 12 09:31:48 ip-10-2-115-196.ap-northeast-1.compute.internal falco[5473]: Loading rules from file /etc/falco/falco_rules.local.yaml:
Jul 12 09:31:48 ip-10-2-115-196.ap-northeast-1.compute.internal falco[5473]: Mon Jul 12 09:31:48 2021: Loading rules from file /etc/falco/falco_rules.local.yaml:
Jul 12 09:31:48 ip-10-2-115-196.ap-northeast-1.compute.internal falco[5473]: Loading rules from file /etc/falco/k8s_audit_rules.yaml:
Jul 12 09:31:48 ip-10-2-115-196.ap-northeast-1.compute.internal falco[5473]: Mon Jul 12 09:31:48 2021: Loading rules from file /etc/falco/k8s_audit_rules.yaml:
Jul 12 09:31:49 ip-10-2-115-196.ap-northeast-1.compute.internal falco[5473]: Starting internal webserver, listening on port 8765
Jul 12 09:31:49 ip-10-2-115-196.ap-northeast-1.compute.internal falco[5473]: Mon Jul 12 09:31:49 2021: Starting internal webserver, listening on port 8765
[root@ip-10-2-115-196 ~]#
```
Check the logs.

```
[root@ip-10-2-115-196 ~]# journalctl -fu falco
-- Logs begin at Sun 2021-07-11 22:45:25 UTC. --
Jul 12 09:31:48 ip-10-2-115-196.ap-northeast-1.compute.internal falco[5473]: Falco initialized with configuration file /etc/falco/falco.yaml
Jul 12 09:31:48 ip-10-2-115-196.ap-northeast-1.compute.internal falco[5473]: Mon Jul 12 09:31:48 2021: Falco initialized with configuration file /etc/falco/falco.yaml
Jul 12 09:31:48 ip-10-2-115-196.ap-northeast-1.compute.internal falco[5473]: Loading rules from file /etc/falco/falco_rules.yaml:
Jul 12 09:31:48 ip-10-2-115-196.ap-northeast-1.compute.internal falco[5473]: Mon Jul 12 09:31:48 2021: Loading rules from file /etc/falco/falco_rules.yaml:
Jul 12 09:31:48 ip-10-2-115-196.ap-northeast-1.compute.internal falco[5473]: Loading rules from file /etc/falco/falco_rules.local.yaml:
Jul 12 09:31:48 ip-10-2-115-196.ap-northeast-1.compute.internal falco[5473]: Mon Jul 12 09:31:48 2021: Loading rules from file /etc/falco/falco_rules.local.yaml:
Jul 12 09:31:48 ip-10-2-115-196.ap-northeast-1.compute.internal falco[5473]: Loading rules from file /etc/falco/k8s_audit_rules.yaml:
Jul 12 09:31:48 ip-10-2-115-196.ap-northeast-1.compute.internal falco[5473]: Mon Jul 12 09:31:48 2021: Loading rules from file /etc/falco/k8s_audit_rules.yaml:
Jul 12 09:31:49 ip-10-2-115-196.ap-northeast-1.compute.internal falco[5473]: Starting internal webserver, listening on port 8765
Jul 12 09:31:49 ip-10-2-115-196.ap-northeast-1.compute.internal falco[5473]: Mon Jul 12 09:31:49 2021: Starting internal webserver, listening on port 8765
```
Check the unit file.

```
[root@ip-10-2-115-196 ~]# cat /usr/lib/systemd/system/falco.service
[Unit]
Description=Falco: Container Native Runtime Security
Documentation=https://falco.org/docs/

[Service]
Type=simple
User=root
ExecStartPre=/sbin/modprobe falco
ExecStart=/usr/bin/falco --pidfile=/var/run/falco.pid
ExecStopPost=/sbin/rmmod falco
UMask=0077
TimeoutSec=30
RestartSec=15s
Restart=on-failure
PrivateTmp=true
NoNewPrivileges=yes
ProtectHome=read-only
ProtectSystem=full
ProtectKernelTunables=true
RestrictRealtime=true
RestrictAddressFamilies=~AF_PACKET

[Install]
WantedBy=multi-user.target
```
As before, running kubectl exec into a Pod on this node was detected.

```
Jul 12 09:35:00 ip-10-2-115-196.ap-northeast-1.compute.internal falco[5473]: 09:35:00.109286185: Notice A shell was spawned in a container with an attached terminal (user=root user_loginuid=-1 k8s_nginx_nginx-c75788bfd-wwczv_default_cb43721e-2e33-4d7d-94a3-ada53f591889_1 (id=0da1313c2a74) shell=bash parent=runc cmdline=bash terminal=34816 container_id=0da1313c2a74 image=nginx)
Jul 12 09:35:00 ip-10-2-115-196.ap-northeast-1.compute.internal falco[5473]: 09:35:00.109286185: Notice A shell was spawned in a container with an attached terminal (user=root user_loginuid=-1 k8s_nginx_nginx-c75788bfd-wwczv_default_cb43721e-2e33-4d7d-94a3-ada53f591889_1 (id=0da1313c2a74) shell=bash parent=runc cmdline=bash terminal=34816 container_id=0da1313c2a74 image=nginx)
```
The alert appears twice, probably because the following settings overlap.

```yaml
# Send information logs to stderr and/or syslog Note these are *not* security
# notification logs! These are just Falco lifecycle (and possibly error) logs.
log_stderr: true
log_syslog: true

syslog_output:
  enabled: true

stdout_output:
  enabled: true
```
Two lines also appeared in /var/log/messages.

```
Jul 12 09:48:26 ip-10-2-115-196 falco: 09:48:26.733184775: Notice A shell was spawned in a container with an attached terminal (user=root user_loginuid=-1 k8s_nginx_nginx-c75788bfd-wwczv_default_cb43721e-2e33-4d7d-94a3-ada53f591889_1 (id=0da1313c2a74) shell=bash parent=runc cmdline=bash terminal=34816 container_id=0da1313c2a74 image=nginx)
Jul 12 09:48:26 ip-10-2-115-196 falco: 09:48:26.733184775: Notice A shell was spawned in a container with an attached terminal (user=root user_loginuid=-1 k8s_nginx_nginx-c75788bfd-wwczv_default_cb43721e-2e33-4d7d-94a3-ada53f591889_1 (id=0da1313c2a74) shell=bash parent=runc cmdline=bash terminal=34816 container_id=0da1313c2a74 image=nginx)
```
When running under systemd, stdout already reaches syslog via journald, so it seems better to disable the syslog output:

```yaml
syslog_output:
  enabled: false
```
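That flag can be flipped non-interactively with sed; the node group user data later in this memo uses the same multi-line trick. It can be sanity-checked on a dummy file (`/tmp/falco-test.yaml` is just for illustration):

```shell
# Build a minimal falco.yaml fragment and flip only syslog_output.enabled.
# The N command pulls the line after the matched section key into the pattern
# space, so the substitution cannot touch stdout_output's enabled flag.
cat << EOF > /tmp/falco-test.yaml
syslog_output:
  enabled: true
stdout_output:
  enabled: true
EOF
sed -i -e '/syslog_output/ { N; s/enabled: true/enabled: false/ }' /tmp/falco-test.yaml
cat /tmp/falco-test.yaml
# → syslog_output:
# →   enabled: false
# → stdout_output:
# →   enabled: true
```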
The Container Insights Fluent Bit also collected the alert.

```json
{
    "host": "ip-10-2-115-196",
    "ident": "falco",
    "message": "09:48:26.733184775: Notice A shell was spawned in a container with an attached terminal (user=root user_loginuid=-1 k8s_nginx_nginx-c75788bfd-wwczv_default_cb43721e-2e33-4d7d-94a3-ada53f591889_1 (id=0da1313c2a74) shell=bash parent=runc cmdline=bash terminal=34816 container_id=0da1313c2a74 image=nginx)",
    "az": "ap-northeast-1c",
    "ec2_instance_id": "i-02304c16a8ae787c7"
}
```
Add a managed node group with eksctl. Written as follows, Falco can be installed via user data.

```sh
cat << "EOF" > managed-ng-2.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: falco
  region: ap-northeast-1
managedNodeGroups:
  - name: managed-ng-2
    minSize: 1
    maxSize: 1
    desiredCapacity: 1
    privateNetworking: true
    preBootstrapCommands:
      - |
        #!/bin/bash
        set -o errexit
        set -o pipefail
        set -o nounset
        rpm --import https://falco.org/repo/falcosecurity-3672BA8F.asc
        curl -s -o /etc/yum.repos.d/falcosecurity.repo https://falco.org/repo/falcosecurity-rpm.repo
        yum -y install kernel-devel-$(uname -r)
        yum -y install falco
        sed -i -e '/syslog_output/ { N; s/enabled: true/enabled: false/ }' /etc/falco/falco.yaml
        systemctl enable falco
        systemctl start falco
EOF
```

```sh
eksctl create nodegroup -f managed-ng-2.yaml
```
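One detail in the manifest above: the here-doc delimiter is quoted (`<< "EOF"`), so the shell performs no expansion inside it and `$(uname -r)` is written into the file literally, expanding only on the node at boot. A quick local check of that behavior (`/tmp/heredoc-demo.txt` is just for illustration):

```shell
# With a quoted delimiter, command substitution inside the here-doc is
# not expanded by the local shell and survives verbatim into the file.
cat << "EOF" > /tmp/heredoc-demo.txt
kernel-devel-$(uname -r)
EOF
cat /tmp/heredoc-demo.txt
# → kernel-devel-$(uname -r)
```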