
#3 Sharing Friday

https://blog.cloudflare.com/harnessing-office-chaos

This page provides an in-depth look at how Cloudflare harnesses physical chaos to bolster Internet security and explores the potential of public randomness and timelock encryption in applications.

It tells the story of Cloudflare’s LavaRand, a system that uses physical entropy sources like lava lamps for Internet security; over four years it has grown and diversified beyond its original single source.
Cloudflare handles millions of HTTP requests secured by TLS, which requires secure randomness.
LavaRand contributes true randomness to Cloudflare’s servers, enhancing the security of cryptographic protocols.


https://radar.cloudflare.com/security-and-attacks

Here you can find a very interesting public dashboard provided by Cloudflare showing plenty of stats about current cyber attacks.


https://github.com/avelino/awesome-go

A curated list of awesome Go frameworks, libraries and software


https://www.anthropic.com/news/claude-3-family

GPT-4 has been beaten.

Anthropic introduces three new AI models – Haiku, Sonnet, and Opus – with ascending capabilities for various applications.
Opus and Sonnet are now accessible via claude.ai and the Claude API, with Haiku coming soon.
Opus excels across popular AI benchmarks.

All models feature improved analysis, forecasting, content creation, code generation, and multilingual conversation abilities.


kubectl trick of the week.

.bashrc

function k_get_images_digests {
  ENV="$1"
  APP="$2"
  # list the unique image digests (with counts) for all pods of a release
  kubectl --context "${ENV}-aks" \
          -n "${ENV}-security" get pod \
          -l "app.kubernetes.io/instance=${APP}" \
          -o json | jq -r '.items[].status.containerStatuses[].imageID' | sort | uniq -c
}

alias k-get-images-id=k_get_images_digests

Through this alias you can get all the image digests of a specific release, filtering by its label, and then count the unique values (note the sort before uniq -c, since uniq only collapses adjacent duplicates).
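For example, assuming a ‘dev’ environment and a Helm release named ‘my-app’ (both names are hypothetical), the call and the output shape would look like this:

k-get-images-id dev my-app
#   2 docker.io/library/my-app@sha256:4f2d...   <- count + unique digest (illustrative)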

#1 Sharing Friday

Kubernetes

  • To quickly check for all images in all #pods from a specific release (eg: Cassandra operator):
kubectl get pods -n prod-kssandra-application -l app.kubernetes.io/created-by=cass-operator -o jsonpath="{.items[*].spec.containers[*].image}" | tr -s '[[:space:]]' '\n' | sort | uniq -c


Bash

  • To generate a strong random #password you don’t need suspicious online services, just plain old bash/WSL.
    This function reads the special device file /dev/urandom, whose output is cryptographically secure,
    keeps only the characters in an allowed list, and finally cuts a 16-character string.

    Keep it with you as an alias in your .bashrc maybe 🙂
function getNewPsw(){
  # keep only allowed characters from the urandom stream and take the first 16
  tr -dc 'A-Za-z0-9!"#$%&'\''()*+,-./:;<=>?@[\]^_`{|}~' </dev/urandom | head -c 16; echo
}
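A quick usage example (the output is random; the value below is only illustrative):

getNewPsw
# j2K!x;9@Qr#7Lp~b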

From CVE scanners to SBOM generation

Example of Software Life Cycle and Bill of Materials Assembly Line

DevOps companies have always been in constant pursuit of making their software development process faster, more efficient, and more secure. In the quest for better software security, a shift is happening from traditional vulnerability scanners to Software Bill of Materials (SBOM) generation. This article explains why devops companies are making the switch and how SBOM generation provides better security for their software.

A CVE is known to all, it’s a security flaw call
It’s a number assigned, to an exposure we’ve spied
It helps track and prevent, any cyber threats that might hide!

Vulnerability scanners are software tools that identify security flaws and vulnerabilities in the code, systems, and applications. They have been used for many years to secure software and have proven to be effective. However, the increasing complexity of software systems, the speed of software development, and the need for real-time security data have exposed the limitations of traditional vulnerability scanners.

Executive Order 14028

Executive Order 14028, signed by President Biden on May 12, 2021, aims to improve the cybersecurity of federal networks and critical infrastructure by strengthening software supply chain security. The order requires federal agencies to adopt measures to ensure the security of software throughout its entire lifecycle, from development to deployment and maintenance.

NIST consulted with the National Security Agency (NSA), Office of Management and Budget (OMB), Cybersecurity & Infrastructure Security Agency (CISA), and the Director of National Intelligence (DNI) and then defined “critical software” by June 26, 2021.  

Such guidance shall include standards, procedures, or criteria regarding providing a purchaser a Software Bill of Materials (SBOM) for each product directly or by publishing it on a public website.

Object Model

CycloneDX Object Model Swimlane (SBOM Object Model)

SBOM generation is a newer approach to software security that provides a comprehensive view of the components and dependencies that make up a software system. SBOMs allow devops companies to see the full picture of their software and understand all the components, including open-source libraries and dependencies, that are used in their software development process. This information is critical for devops companies to have, as it allows them to stay on top of security vulnerabilities and take the necessary measures to keep their software secure.

The main advantage of SBOM generation over vulnerability scanners is that SBOMs provide a real-time view of software components and dependencies, while vulnerability scanners only provide information about known vulnerabilities.

One practical example of an SBOM generation tool is Trivy, an open-source vulnerability scanner for container images and filesystems. It detects vulnerabilities, can generate SBOMs, and integrates with the CI/CD pipeline, making it an effective tool for devops companies.
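As a minimal sketch (the image name and output file are just examples), Trivy can emit a CycloneDX SBOM for an image and later scan that same SBOM:

# Generate a CycloneDX SBOM for an image and save it to a file
trivy image --format cyclonedx --output sbom.cdx.json nginx:alpine
# Scan the SBOM itself for vulnerabilities
trivy sbom sbom.cdx.json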

Another example is Anchore Grype, an open-source vulnerability scanner that can take an SBOM as input, giving visibility into software components and dependencies and making it easier for devops companies to stay on top of security vulnerabilities.

OWASP Dependency-Track integrations

Finally, Dependency Track is another great tool by OWASP that allows organizations to identify and reduce risk in the software supply chain.
The Open Web Application Security Project® (OWASP) is a nonprofit foundation that works to improve the security of software through community-led open-source software projects.

The main features of Dependency Track include:

  1. Continuous component tracking: Dependency Track tracks changes to software components and dependencies in real-time, ensuring up-to-date security information.
  2. Vulnerability Management: The tool integrates with leading vulnerability databases, including the National Vulnerability Database (NVD), to provide accurate and up-to-date information on known vulnerabilities.
  3. Policy enforcement: Dependency Track enables organizations to create custom policies to enforce specific security requirements and automate the enforcement of these policies.
  4. Component Intelligence: The tool provides detailed information on components and dependencies, including licenses, age, and other relevant information.
  5. Integration with DevOps tools: Dependency Track integrates with popular DevOps tools, such as Jenkins and GitHub, to provide a seamless experience for devops teams (see the API sketch after this list).
  6. Reporting and Dashboards: Dependency Track provides customizable reports and dashboards to help organizations visualize their software components and dependencies, and identify potential security risks.
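As an illustration of point 5, here is a hedged sketch of pushing a CycloneDX SBOM into Dependency-Track through its REST API (the host, API key, and project name are hypothetical placeholders):

# Upload an SBOM to a Dependency-Track instance; autoCreate creates the project if missing
curl -s -X POST "https://dtrack.example.com/api/v1/bom" \
  -H "X-Api-Key: ${DT_API_KEY}" \
  -F "projectName=my-app" \
  -F "projectVersion=1.0.0" \
  -F "autoCreate=true" \
  -F "bom=@sbom.cdx.json"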


CKS Challenge #1

Here we’re going to see together how to fix a broken Kubernetes setup, thanks to a nice KodeKloud challenge, where:

  1. The persistent volume claim can’t be bound to the persistent volume
  2. Load the AppArmor profile called ‘custom-nginx’ and ensure it is enforced
  3. The deployment ‘alpha-xyz’ uses an insecure image and needs to mount the ‘data-volume’
  4. ‘alpha-svc’ should be exposed on ‘port: 80’ and ‘targetPort: 80’ as ClusterIP
  5. Create a NetworkPolicy called ‘restrict-inbound’ in the ‘alpha’ namespace. Policy Type = ‘Ingress’. Inbound access only allowed from the pod called ‘middleware’ with label ‘app=middleware’. Inbound access only allowed to TCP port 80 on pods matching the policy
  6. The ‘external’ pod should NOT be able to connect to ‘alpha-svc’ on port 80


1 Persistent Volume Claim

First of all we notice the PVC is there but is pending, so let’s look into it.

One of the first differences we notice is the access mode: ReadWriteOnce on the PVC, while the PV offers ReadWriteMany.

We also want to check whether that storage class is present on the cluster.

Let’s fix that by creating the local-storage StorageClass:
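A minimal sketch of the StorageClass, following the local static-provisioning setup from the Kubernetes docs linked below:

kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
EOF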

Get the PVC YAML, delete the extra lines, and modify the access mode; since access modes are immutable, the PVC must be deleted and re-created:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  finalizers:
  - kubernetes.io/pvc-protection
  name: alpha-pvc
  namespace: alpha
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-storage
  volumeMode: Filesystem

Now the PVC is “waiting for first consumer”, so let’s move on to fixing the deployment 🙂

https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims

https://kubernetes.io/docs/concepts/storage/storage-classes/#local


2 AppArmor

Before fixing the deployment we need to load the AppArmor profile, otherwise the pod won’t start.

To do this we copy our profile into /etc/apparmor.d and load it in enforce mode, as sketched below.
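A sketch of the commands, assuming the profile file is called ‘custom-nginx’ and we are on the node that will run the pod:

# Copy the profile and load it (apparmor_parser enforces by default)
sudo cp custom-nginx /etc/apparmor.d/custom-nginx
sudo apparmor_parser -q /etc/apparmor.d/custom-nginx
# Verify it shows up among the enforced profiles
sudo aa-status | grep custom-nginx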


3 DEPLOYMENT

For this exercise the permitted images are: ‘nginx:alpine’, ‘bitnami/nginx’, ‘nginx:1.13’, ‘nginx:1.17’, ‘nginx:1.16’ and ‘nginx:1.14’.
We use ‘trivy’ to find the image with the least number of ‘CRITICAL’ vulnerabilities.

Let’s take a look at what we have now:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: alpha-xyz
  name: alpha-xyz
  namespace: alpha
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alpha-xyz
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: alpha-xyz
    spec:
      containers:
      - image: ?
        name: nginx

We can scan all the permitted images to confirm that the most secure is the alpine version; a sketch of the loop follows.
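A hedged sketch of such a scan, counting report lines that mention CRITICAL for each image (exact counts depend on the vulnerability database at scan time):

for img in nginx:alpine bitnami/nginx nginx:1.13 nginx:1.14 nginx:1.16 nginx:1.17; do
  printf '%-16s ' "$img"
  trivy image --severity CRITICAL -q "$img" | grep -c 'CRITICAL'
done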

So we can now fix the deployment in three steps:

  • put nginx:alpine image
  • add alpha-pvc as a volume named ‘data-volume’
  • insert the annotation for the AppArmor profile loaded before
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: alpha-xyz
  name: alpha-xyz
  namespace: alpha
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alpha-xyz
  strategy: {}
  template:
    metadata:
      labels:
        app: alpha-xyz
      annotations:
        container.apparmor.security.beta.kubernetes.io/nginx: localhost/custom-nginx
    spec:
      containers:
      - image: nginx:alpine
        name: nginx
        volumeMounts:
        - name: data-volume
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data-volume
        persistentVolumeClaim:
          claimName: alpha-pvc
---

4 SERVICE

We can be quick on this with a one-liner:

kubectl expose deployment alpha-xyz --type=ClusterIP --name=alpha-svc --namespace=alpha --port=80 --target-port=80
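A quick sanity check that the service exists and picked up the deployment’s pod as an endpoint:

kubectl -n alpha get svc alpha-svc
kubectl -n alpha get endpoints alpha-svc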

5 NETWORK POLICY

Here we want to apply a policy that:

  • selects pods matching the ‘alpha-xyz’ label
  • applies only to incoming (ingress) traffic
  • allows it only from pods labelled ‘app=middleware’
  • only over TCP port 80
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-inbound
  namespace: alpha
spec:
  podSelector:
    matchLabels:
      app: alpha-xyz
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: middleware
      ports:
        - protocol: TCP
          port: 80
        

We can now test that the route is closed between the ‘external’ pod and alpha-xyz:
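A sketch of the check, assuming both test pods live in the ‘alpha’ namespace as in the challenge:

# From ‘external’ the request should now time out...
kubectl -n alpha exec external -- wget -qO- -T 2 http://alpha-svc
# ...while ‘middleware’ (label app=middleware) should still get a response
kubectl -n alpha exec middleware -- wget -qO- -T 2 http://alpha-svc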

Done!



Connect to an external service on a different AKS cluster through private network

My goal is to call a service on an AKS cluster (aks1/US) from a pod on a second AKS cluster (aks2/EU).
These clusters will be on different regions and should communicate over a private network.

For the cluster networking I’m using the Azure CNI plugin.

Above you can see a diagram of the two possible final architectures: an ExternalName or external-IP service on the US AKS pointing to a private EU ingress controller IP.

So, after some reading and some videos, it seemed to me that the best option was an ExternalName service on AKS2 pointing to a hostname defined in a custom private DNS zone (ecommerce.private.eu.dev), with the two VNets peered beforehand.

Address space for aks services:
dev-vnet  10.0.0.0/14
=======================================
dev-test1-aks   v1.22.4 - 1 node
dev-test1-vnet  11.0.0.0/16
=======================================
dev-test2-aks   v1.22.4 - 1 node
dev-test2-vnet  11.1.0.0/16 

After some trials I could get connectivity between the pod networks, but I was never able to reach the service network from the other cluster.

  • I don’t have any active firewall
  • I’ve peered all three networks: dev-test1-vnet, dev-test2-vnet, dev-vnet (services CIDR)
  • I’ve created a private DNS zone private.eu.dev with an “ecommerce” A record (10.0.129.155) that should be resolved by the ExternalName service

dev-test1-aks (EU cluster):

kubectl create deployment eu-ecommerce --image=k8s.gcr.io/echoserver:1.4 --port=8080 --replicas=1

kubectl expose deployment eu-ecommerce --type=ClusterIP --port=8080 --name=eu-ecommerce

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml

kubectl create ingress eu-ecommerce --class=nginx --rule=eu.ecommerce/*=eu-ecommerce:8080

This is the ingress rule:

❯ kubectl --context=dev-test1-aks get ingress eu-ecommerce-2 -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: eu-ecommerce-2
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: lb.private.eu.dev
    http:
      paths:
      - backend:
          service:
            name: eu-ecommerce
            port:
              number: 8080
        path: /ecommerce
        pathType: Prefix
status:
  loadBalancer:
    ingress:
    - ip: 20.xxxxx

This is one of the ExternalName services I’ve tried on dev-test2-aks:

apiVersion: v1
kind: Service
metadata:
  name: eu-services
  namespace: default
spec:
  type: ExternalName
  externalName: ecommerce.private.eu.dev
  ports:
    - port: 8080
      protocol: TCP

These are some of my tests:

# --- Test externalName 
kubectl --context=dev-test2-aks run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- wget -qO- http://eu-services:8080
: '
    wget: cant connect to remote host (10.0.129.155): Connection timed out
'

# --- Test connectivity AKS1 -> eu-ecommerce service
kubectl --context=dev-test1-aks run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- wget -qO- http://eu-ecommerce:8080
kubectl --context=dev-test1-aks run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- wget -qO- http://10.0.129.155:8080
kubectl --context=dev-test1-aks run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- wget -qO- http://eu-ecommerce.default.svc.cluster.local:8080
kubectl --context=dev-test1-aks run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- wget -qO- http://ecommerce.private.eu.dev:8080
# OK client_address=11.0.0.11

# --- Test connectivity AKS2 -> eu-ecommerce POD
kubectl --context=dev-test2-aks run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- wget -qO- http://11.0.0.103:8080
#> OK


# --- Test connectivity - LB private IP
kubectl --context=dev-test1-aks run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- wget --no-cache -qO- http://lb.private.eu.dev/ecommerce
#> OK
kubectl --context=dev-test2-aks run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- wget --no-cache -qO- http://lb.private.eu.dev/ecommerce
#> KO  wget: can't connect to remote host (10.0.11.164): Connection timed out
#>> This is the ClusterIP! -> Think twice!


# --- Traceroute gives no information
kubectl --context=dev-test2-aks  run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- traceroute -n -m4 ecommerce.private.eu.dev
: '
    *  *  *
    3  *  *  *
    4  *  *  *
'

# --- test2-aks can see the private dns zone and resolve the hostname
kubectl --context=dev-test2-aks run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- nslookup ecommerce.private.eu.dev
: ' Server:    10.0.0.10
    Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
    Name:      ecommerce.private.eu.dev
    Address 1: 10.0.129.155
'

I’ve also created inbound and outbound network policies for the AKS networks:

  • on dev-aks (10.0/16) allow all incoming from 11.1/16 and 11.0/16
  • on dev-test2-aks allow any outbound

SOLUTION: set the ingress controller’s load balancer as an internal LB, exposing its IP on the private subnet:

kubectl --context=dev-test1-aks patch service -n ingress-nginx ingress-nginx-controller --patch '{"metadata": {"annotations": {"service.beta.kubernetes.io/azure-load-balancer-internal": "true"}}}'

This article is also on Medium 🙂

