TuxErrante

Libertà vo cercando, ch'è sì cara, come sa chi per lei vita rifiuta

Resetting Your Mind

Post-Vacation Guide to Self-Improvement

-> Download the Google Doc template for daily journaling here!
[ENG] [ITA]


Introduction: Embarking on a Journey of Self-Discovery

As the golden hues of summer fade into the crisp embrace of autumn, we find ourselves at a pivotal juncture—a moment ripe for introspection and personal evolution. 🚀

This comprehensive guide invites you to embark on an odyssey of self-improvement, delving deep into the labyrinth of the human psyche.

We’ll navigate the treacherous waters of cognitive biases, scale the peaks of mindfulness, and unearth the hidden treasures of self-awareness.

By the end of this journey of self-discovery, you’ll be equipped with a cartographer’s precision in mapping out your path to a more fulfilling life.


I. The Power of Self-Reflection

A. The Illusion of Control

Imagine, if you will, a masterful puppeteer, deftly manipulating the strings of a marionette.
Now, picture yourself as both the puppeteer and the puppet—a paradox that has encapsulated the human condition ever since we began to reflect on ourselves.
We often fancy ourselves as the sole authors of our thoughts and actions, the puppeteers of our destiny. However, recent forays into the realm of cognitive science have revealed a more complex narrative.

Our minds, it seems, are less like well-oiled machines and more like eccentric artists, prone to flights of fancy and irrational brushstrokes. This tendency to deviate from the path of pure logic is what psychologists term “cognitive biases“—mental shortcuts that, while often useful, can lead us astray in a labyrinth of misperception.

Dual nature of the brain

Enter Daniel Kahneman, the Virgil to our Dante in this cognitive underworld. His seminal work, “Thinking, Fast and Slow,” illuminates the dual nature of our thought processes:

  1. System 1: The impulsive artist, flinging paint on the canvas of our consciousness with reckless abandon. Quick, intuitive, and emotionally charged, this system is the wellspring of our gut reactions and instinctive responses.
  2. System 2: The meticulous critic, scrutinizing every brushstroke with a discerning eye. Slow, deliberate, and analytical, this system is responsible for our more considered judgments and rational decision-making.

Understanding this cognitive duality is akin to gaining x-ray vision into the inner workings of our minds. It allows us to recognize when System 1 might be leading us down a primrose path of bias, and when it’s time to summon the more methodical System 2 to the fore.

However, let us not fall into the trap of oversimplification. Kahneman’s later work, “Noise,” introduces a new character to this cognitive drama—the concept of “noise” in decision-making. Picture a group of well-intentioned judges, all faced with the same case. Despite their expertise, their judgments may vary wildly due to factors as capricious as their mood or the weather. This variability, this “noise,” can lead to a cacophony of inconsistent and potentially unfair decisions.

Moreover, the scientific community, ever vigilant, has raised eyebrows at some of the experiments presented in “Thinking, Fast and Slow.” The specter of irreproducibility looms, casting shadows of doubt on the generalizability of certain findings. Yet, like a controversial masterpiece in an art gallery, Kahneman’s work continues to provoke thought and inspire further exploration of the human mind.

B. The Importance of Mindfulness

In the bustling marketplace of our minds, where thoughts jostle for attention and emotions cry out their wares, mindfulness emerges as a serene oasis. It is the practice of becoming a neutral observer to the carnival of our inner experience, paying attention to the present moment with the impartiality of a scientist and the wonder of a child.

Jon Kabat-Zinn, a modern-day alchemist in the realm of mental well-being, has distilled the ancient wisdom of mindfulness into a potent elixir for contemporary ailments. His mindfulness-based stress reduction (MBSR) techniques offer a beacon of hope in the stormy seas of modern life.

Imagine mindfulness as a skilled gardener, tenderly cultivating the soil of your consciousness. With patient attention, it can:

  1. Prune away the overgrown vines of stress and anxiety
  2. Nurture the delicate blossoms of emotional intelligence
  3. Fortify the roots of resilience against life’s tempests
  4. Create fertile ground for creativity and insight to flourish

By developing this inner garden, we create a sanctuary where we can retreat from the cacophony of automatic thoughts and knee-jerk reactions. Here, in this cultivated space of awareness, we can observe our cognitive biases with clarity and compassion, gently redirecting our mental energies towards more fruitful paths.

[Italian]

II. Practical Exercises for Self-Awareness

A. Journaling: The Cartography of the Soul

Picture yourself as an intrepid explorer, charting the vast and often mysterious terrain of your inner world. Your journal is your map, your compass, and your field notes all in one. With each entry, you’re not just recording events; you’re documenting the contours of your psyche, the climate of your emotions, and the flora and fauna of your thoughts.

Consider these journaling prompts as your expedition gear:

  1. “What unexpected discovery did I make about myself today?”
  2. “If my emotions were weather patterns, what’s the forecast for today, and why?” 🌦️
  3. “What cognitive bias might be influencing my current perspective on …?”
  4. “If I could have a conversation with my future self, what advice would they give me?”

As you traverse this inner landscape day by day, patterns will emerge like constellations in the night sky, guiding you towards deeper self-understanding.

B. Mindfulness Meditation

⏸️ As mentioned in the previous chapter, in our “modern” life, mindfulness meditation is akin to finding the pause button on reality.
It’s a practice that invites you to step off the treadmill of constant doing and into the realm of simply being.

Begin your meditation journey with the curiosity of a novice and the patience of a sage:

  1. Start small:
    Even five minutes of focused breathing can be a revolutionary act in a world that demands constant attention.
  2. Use guided resources:
    Apps like Headspace or Calm can be like having a meditation sherpa, guiding you through the initial foothills of practice.
  3. Embrace imperfection:
    Your mind will wander. That’s not failure; it’s part of the process. Each time you notice and gently return to your breath, you’re strengthening your mindfulness muscles.

Remember, the goal isn’t to achieve a blank mind—that’s as impossible as trying to empty the ocean. Instead, you’re learning to surf the waves of your thoughts rather than being tossed about by them.

C. Cognitive Behavioral Therapy

Imagine your mind as a vast computer network. CBT is like a sophisticated debugging program, helping you identify and rewrite faulty code in your mental software. It’s a collaborative process between you and a trained therapist, aimed at uncovering the hidden scripts that drive your thoughts, emotions, and behaviors.

Key CBT techniques include:

  1. Thought records: Documenting your automatic thoughts and examining the evidence for and against them.
  2. Behavioral experiments: Testing the validity of your beliefs through real-world actions.
  3. Cognitive restructuring: Learning to reframe negative thought patterns into more balanced, realistic perspectives.

While CBT can be particularly transformative for those grappling with anxiety or depression, its principles can benefit anyone seeking to optimize their mental processes. It’s like upgrading your internal operating system to run more smoothly and efficiently.

III. Overcoming Cognitive Biases

Try to fight your biases every day. Here are a few of the most common ones.

A. Confirmation Bias

Imagine you’re an art collector with a predilection for impressionist paintings.
You’ve just acquired what you believe to be a lost Monet. Naturally, you seek out experts who specialize in impressionism, read articles about Monet’s techniques, and surround yourself with other Monet enthusiasts. But what if, in your zeal, you’ve overlooked crucial evidence that your painting is actually a skilled forgery?

This is confirmation bias in action—our tendency to seek out information that confirms our existing beliefs while ignoring or discounting contradictory evidence. It’s like wearing rose-colored glasses that filter out any hues that don’t match our preconceptions.

To combat this bias:

  1. Play devil’s advocate with yourself. For every belief you hold, challenge yourself to find three pieces of credible evidence that contradict it.
  2. Engage in structured debates where you must argue for positions you disagree with. This exercise in intellectual empathy can broaden your perspective.
  3. Cultivate a diverse network of friends and colleagues who will challenge your views respectfully but firmly.

Remember, the goal isn’t to abandon your beliefs, but to hold them with an open hand rather than a clenched fist.

B. Availability Heuristic

Picture yourself as the director of your own mental news network. The availability heuristic is like a sensationalist news anchor, giving disproportionate airtime to stories that are vivid, recent, or emotionally charged, regardless of their actual frequency or importance.

For instance, after watching a documentary about shark attacks, you might overestimate the likelihood of being bitten by a shark, even though you’re statistically more likely to be injured by a vending machine.

To counteract this bias:

  1. Become a data detective. Before making judgments about likelihood or frequency, seek out hard data and statistics from reliable sources.
  2. Practice perspective-taking. Ask yourself, “If I were from a different background or lived in a different part of the world, how might my perception of this issue change?”
  3. Keep a “surprise journal” where you record events or information that contradict your expectations. This can help calibrate your intuitive sense of probability.

C. Anchoring Bias

Imagine you’re at an auction, and the first item up for bid is a rare book. The auctioneer starts the bidding at $1000. Suddenly, that number becomes a mental anchor, influencing how you value not just that book, but potentially every item that follows.

The anchoring bias is like a stubborn boat anchor, holding our judgments in place even when we should be drifting towards a more accurate assessment. It’s particularly insidious in negotiations, where the first number mentioned can disproportionately influence the final outcome.

To weigh anchor and sail towards more accurate judgments:

  1. Before entering any situation involving numerical estimates or negotiations, decide on your own values or ranges independently.
  2. Practice generating multiple reference points. If you’re estimating the cost of a project, for example, break it down into smaller components and estimate each separately before summing them up.
  3. Seek out diverse perspectives before making a decision. Each new viewpoint can serve as a potential alternative anchor, reducing the pull of any single reference point.

If this topic intrigues you, you’ll find a much more extensive list in my notes and, even more so, on the FS blog.

IV. Building a Mindful Lifestyle

A. Incorporate Mindfulness into Daily Activities

Mindfulness need not be confined to the meditation cushion. In fact, the real magic happens when we infuse our daily activities with present-moment awareness. This is the alchemy of turning mundane tasks into opportunities for insight and growth.

Consider these mindful twists on everyday activities:

  1. Mindful Eating: Transform your meals into a sensory symphony. Notice the colors on your plate, inhale the aromas, savor each texture and flavor. Eating becomes not just fueling, but a celebration of the senses.
  2. Mindful Walking: Whether it’s a forest trail or a city sidewalk, walk as if you’re discovering the world for the first time. Feel the ground beneath your feet, the rhythm of your breath, the play of light and shadow around you.
  3. Mindful Listening: In conversations, practice giving your full attention to the speaker. Notice not just their words, but their tone, body language, and the emotions underlying their message. You might be surprised at how much more you hear when you’re truly listening.
  4. Mindful Creation: Whether you’re coding, cooking, or crafting, bring full awareness to the process. Notice the sensations in your body, the thoughts that arise, the subtle decisions you make at each step.

By sprinkling these moments of mindfulness throughout your day, you’re not just going through the motions of life—you’re fully inhabiting each moment.

B. Connect with Nature

In our increasingly digital world, reconnecting with nature is not just a luxury—it’s a necessity for mental and emotional wellbeing. Nature, in its infinite wisdom, has much to teach us about balance, resilience, and the art of simply being.

Consider these nature-based practices:

  1. Forest Bathing: This Japanese practice, known as “shinrin-yoku,” involves immersing yourself in the atmosphere of the forest. It’s not about hiking or exercising, but about opening your senses to the natural world around you.
  2. Earthing: Also known as grounding, this practice involves direct physical contact with the Earth’s surface. Walk barefoot on grass, sand, or soil, and feel the subtle energy exchange between your body and the earth.
  3. Sky Gazing: Lie on your back and watch the ever-changing canvas of the sky. Whether it’s the drama of storm clouds or the serenity of stars, sky gazing can shift your perspective and remind you of the vastness beyond your immediate concerns.
  4. Plant Tending: Nurturing a garden or even a single houseplant can be a profound practice in patience, care, and attunement to natural rhythms.

Research suggests that these nature connections can lower cortisol levels, boost creativity, and even enhance our capacity for empathy and cooperation. In the grand tapestry of life, we are not separate from nature—we are nature, and reconnecting with the wild can be a powerful way of coming home to ourselves.

C. Practice Gratitude: The Alchemy of Appreciation

Gratitude is like a pair of magical spectacles that, once donned, transform the mundane into the miraculous. It’s the art of recognizing the gifts in our lives, both grand and subtle, and allowing that recognition to shift our entire emotional landscape.

Here are some creative ways to start cultivating a gratitude practice:

  1. Gratitude Jar: Each day, write down one thing you’re grateful for on a small slip of paper and add it to a jar. On tough days, read through some of these notes to remind yourself of life’s blessings.
  2. Photographic Gratitude: Take a photo each day of something you’re grateful for. Over time, you’ll create a visual diary of appreciation that can be powerful to look back on.
  3. Gratitude Letters: Once a month, write a detailed letter of thanks to someone who has positively impacted your life. The act of writing deepens your appreciation, and sharing the letter can create a beautiful ripple effect of positivity.
  4. Gratitude Walks: As you walk, mentally note everything you’re grateful for that you encounter—the warmth of the sun, the smile of a stranger, the convenience of sidewalks. This practice combines the benefits of nature connection, mindfulness, and gratitude.
  5. “Three Good Things” Exercise: Each night before bed, reflect on three good things that happened during the day, no matter how small. This practice has been shown to increase happiness and decrease depressive symptoms.

Remember, gratitude isn’t about ignoring life’s challenges or forcing positivity. It’s about developing a more balanced perspective that acknowledges both the difficulties and the gifts in our lives.

Gratitude practice 2.0

But an effective gratitude practice goes beyond simply listing things to be grateful for; it involves rewiring the nervous system.

Selecting Your Story

Begin by identifying a story that resonates deeply with you. It could be a personal anecdote, a fictional tale, or a historical event. The key is that it evokes feelings of inspiration, compassion, or awe.

Creating Your Journal Entry

Once you’ve chosen your story, dedicate a page or two in your journal to explore it in detail. Consider the following prompts:

  • Express gratitude: Write about the aspects of the story that you are grateful for. What qualities or actions inspire gratitude in you?
  • Summarize the story: Briefly recount the main events and characters.
  • Identify the emotional impact: What feelings does the story evoke in you? Are there specific moments or characters that resonate particularly strongly?
  • Connect to your own experiences: How does this story relate to your own life? Are there any parallels or lessons that you can draw from it?

Conclusion: The Never-Ending Story of Growth

As we conclude this enhanced guide, remember that personal growth is not a destination but a journey—an ongoing narrative that you are constantly writing and rewriting. Like any good story, it will have its plot twists, its moments of triumph and despair, its cast of supporting characters, and its themes that evolve over time.

The practices and insights shared here are not a prescription for perfection, but rather a set of tools to help you navigate the complex terrain of your own psyche. As you implement these strategies, approach yourself with the curiosity of a scientist, the compassion of a good friend, and the patience of a wise teacher.

Remember, too, that growth often happens in the spaces between our deliberate efforts—in the quiet moments of reflection, in the unexpected challenges that push us beyond our comfort zones, and in the connections we forge with others on their own journeys.

As you move forward, carry with you the understanding that every experience, every mistake, every moment of clarity or confusion, is an opportunity for growth. Your life is a masterpiece in progress, and you are both the artist and the art.

So, as the season changes and you embark on this next chapter, do so with a heart full of curiosity, a mind open to new possibilities, and a spirit ready for adventure. The journey continues.


#4 Sharing Friday

https://arstechnica.com/tech-policy/2024/04/google-agrees-to-delete-private-browsing-data-to-settle-incognito-mode-lawsuit/

Google has agreed to a settlement over a class-action lawsuit regarding Chrome’s “Incognito” mode, which involves deleting billions of data records of users’ private browsing activities.
The settlement includes maintaining a change to Incognito mode that blocks third-party cookies by default, enhancing privacy for users and reducing the data Google collects.


Profile-guided optimization – The Go Programming Language (golang.org)

Go: The Complete Guide to Profiling Your Code | HackerNoon

Have you already tried Go profiling with PGO?

  • More informed compiler optimizations lead to better application performance.
  • Profiles from already-optimized binaries can be used, allowing for an iterative lifecycle of continuous improvement.
  • Go PGO is designed to be robust to changes between the profiled and current versions of the application.
  • Storing profiles in the source repository simplifies the build process and ensures reproducible builds.
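The workflow, as sketched in the Go docs, boils down to collecting a CPU profile and checking it in under the default name. The host, port, and paths below are placeholders, and the commands assume your service exposes the net/http/pprof endpoints:

```shell
# Collect a representative CPU profile from a running instance
# (assumes net/http/pprof is enabled; host and port are placeholders).
curl -o cpu.pprof "http://localhost:6060/debug/pprof/profile?seconds=30"

# Store it under the default name in the main package directory;
# 'go build' picks it up automatically (-pgo=auto is the default since Go 1.21).
mv cpu.pprof ./default.pgo
go build ./...
```

Committing default.pgo alongside the source is what makes the builds reproducible: every build, on any machine, optimizes against the same profile.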

https://jvns.ca/blog/2024/02/16/popular-git-config-options/#commit-verbose-true

Here’s a list of git config options that could be very useful!
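For instance, a few options in the same spirit as the post (check `man git-config` for your version; `push.autoSetupRemote` needs git >= 2.37):

```shell
git config --global commit.verbose true        # show the staged diff while editing the commit message
git config --global pull.ff only               # fail instead of creating surprise merge commits on pull
git config --global push.autoSetupRemote true  # no more --set-upstream on the first push (git >= 2.37)
git config --global rebase.autosquash true     # apply fixup!/squash! commits automatically on rebase
```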


https://www.srepath.com/clearing-observability-delusions/

Observability is highlighted as the fundamental practice for all other Site Reliability Engineering (SRE) areas, essential for avoiding “flying blind.”

The article discusses common misconceptions that hinder success in observability, emphasizing the need for the right mindset and the avoidance of overly complex solutions.

The shift towards event-based Service Level Objectives (SLOs) is recommended over time-based metrics, advocating for simplicity and the importance of leadership support in SLO implementation.


https://blog.plerion.com/hacking-terraform-state-privilege-escalation/

The article discusses the security risks associated with Terraform state files in DevOps, particularly when an attacker gains the ability to edit them.

It highlights that while the Terraform state should be secure and only modifiable by the CI/CD pipeline, in reality, an attacker can exploit it to take over the entire infrastructure.
The piece emphasizes the importance of securing both the Terraform files and the state files, as well as implementing measures like state locking and permission configurations to prevent unauthorized access and modifications.
It also explores the potential for attackers to use custom providers to execute malicious code during the Terraform initialization process.


https://thehackernews.com/2024/03/microsoft-confirms-russian-hackers.html

The article details a cybersecurity breach where the Russian hacker group Midnight Blizzard accessed Microsoft’s source code and internal systems.

Microsoft confirmed the breach originated from a password spray attack on a non-production test account without multi-factor authentication.

The attack, which began in November 2023, led to the theft of undisclosed customer secrets communicated via email. Microsoft has contacted affected customers and increased security measures, but the full extent and impact of the breach remain under investigation. The incident highlights the global threat of sophisticated nation-state cyber attacks.

#3 Sharing Friday

https://blog.cloudflare.com/harnessing-office-chaos

This page provides an in-depth look at how Cloudflare harnesses physical chaos to bolster Internet security and explores the potential of public randomness and timelock encryption in applications.

It tells the story of Cloudflare’s LavaRand, a system that uses physical entropy sources like lava lamps for Internet security and has grown over four years, diversifying beyond its original single source.
Cloudflare handles millions of HTTP requests secured by TLS, which requires secure randomness.
LavaRand contributes true randomness to Cloudflare’s servers, enhancing the security of cryptographic protocols.


https://radar.cloudflare.com/security-and-attacks

Here you can find a very interesting public dashboard provided by Cloudflare, showing many statistics about current cyber attacks.


avelino/awesome-go: A curated list of awesome Go frameworks, libraries and software (github.com)



https://www.anthropic.com/news/claude-3-family

GPT-4 has been beaten.

Introducing three new AI models – Haiku, Sonnet, and Opus – with ascending capabilities for various applications.
Opus and Sonnet are now accessible via claude.ai and the Claude API, with Haiku coming soon.
Opus excels in benchmarks for AI systems.

All models feature improved analysis, forecasting, content creation, code generation, and multilingual conversation abilities.


kubectl trick of the week.

.bashrc

function k_get_images_digests {
  ENV="$1"
  APP="$2"
  # List the image digests of all pods of a release, counting unique values
  # (sort before uniq, since uniq only collapses adjacent duplicates)
  kubectl --context "${ENV}-aks" \
          -n "${ENV}-security" get pod \
          -l "app.kubernetes.io/instance=${APP}" \
          -o json | jq -r '.items[].status.containerStatuses[].imageID' | sort | uniq -c
}

alias k-get-images-id=k_get_images_digests

Through this alias you can get all the image digests of a specific release, filtering by its label, and then count how many pods run each unique digest.

#2 Sharing Friday

News

  • Found a new security bug in Apple M-series chipset
    The article discusses a new vulnerability in Apple’s M-series chips that allows attackers to extract secret encryption keys during cryptographic operations.
    The flaw is due to the design of the chips’ data memory-dependent prefetcher (DMP) and cannot be patched directly, potentially affecting performance.
  • Redis is changing its licensing
    Redis is adopting a dual licensing model for all future versions starting with Redis 7.4, using RSALv2 and SSPLv1 licenses, moving away from the BSD license.
    Future Redis releases will integrate advanced data types and processing engines from Redis Stack, making them freely available as part of the core Redis product.
    The new licenses restrict commercialization and managed service provision of Redis, aiming to protect Redis’ investments and its open source community.
    Redis will continue to support its community and enterprise customers, with no changes for existing Redis Enterprise customers and continued support for partner ecosystem.
  • Nobody wants to work with our best engineer
    The article discusses the challenges faced with an engineer who was technically skilled but difficult to work with.
    It highlights the importance of teamwork and collaboration in engineering, emphasizing that being right is less important than being effective and considerate.

Bash

Get your current branch fast up-to-date with master with this alias

alias git-update-branch='current_branch=$(git branch --show-current); git switch master && git pull --force && git switch "$current_branch" && git merge master'

Note the single quotes: with double quotes, $(git branch --show-current) would be expanded once, when the alias is defined, instead of each time you run it.

Software Architecture

  • Chubby OSDI paper by Mike Burrows
    and here’s their presentation on this topic
    https://www.usenix.org/conference/srecon23emea/presentation/virji

  • Chubby is intended to provide coarse-grained locking and reliable storage for loosely-coupled distributed systems, prioritizing availability and reliability over high performance.

    It has been used to synchronize activities and agree on environmental information among clients, serving thousands concurrently.

    Similar to a distributed file system, it offers advisory locks and event notifications, aiding in tasks like leader election for services like the Google File System and Bigtable.

    The emphasis is on easy-to-understand semantics and moderate client availability, with less focus on throughput and storage capacity.

    Database simplification: the system was later simplified by building a simple database on top of write-ahead logging and snapshotting.
  • Introduction to Google Site Reliability Engineering slides by Salim Virji
    The presentation introduces key concepts related to SRE, emphasizing the importance of automating processes for reliability and efficiency.

    It also delves into the delicate balance between risk-taking and maintaining system stability.

    Throughout the slides, the material highlights teamwork, effective communication, and the impact of individual behavior within engineering teams. Overall, the session aims to equip students with practical insights for successful SRE practices while navigating the complexities of modern software systems.

#1 Sharing Friday

Kubernetes

  • To quickly check for all images in all #pods from a specific release (eg: Cassandra operator):
kubectl get pods -n prod-kssandra-application -l app.kubernetes.io/created-by=cass-operator -o jsonpath="{.items[*].spec.containers[*].image}" | tr -s '[:space:]' '\n' | sort | uniq -c

AI

News

Bash

  • To generate a strong random #password you don’t need suspicious online services, just plain old bash/WSL.
    This function reads from the special device file /dev/urandom,
    so the output is cryptographically secure; we then keep only characters from an allowed list and finally cut the result to a 16-character string.

    Keep it with you as an alias in your .bashrc maybe 🙂
function getNewPsw() {
  tr -dc 'A-Za-z0-9!"#$%&'\''()*+,-./:;<=>?@[\]^_`{|}~' </dev/urandom | head -c 16; echo
}

SAFe VS Platform Engineering

I know this is a very opinionated topic and "agile coaches" everywhere are ready to fight, so I'll try to keep it short, making clear that this is based purely on my experience and on discussions with other engineers and managers at different companies and levels.

We’re a team of Scaled Agile SRE,
Working together to deliver quality,
Breaking down silos and communication gaps,
We’re on a mission to make sure nothing lacks.

We follow the SAFe framework to a tee,
With its ARTs and PI planning, we’re not so free,
To deliver value in every sprint,
Continuous delivery is our mint.

Chorus:
Scaled Agile and SRE,
Together we achieve,
Quality and speed,
We’re the dream team.

We prioritize work and plan ahead,
Collaborate and ensure nothing’s left unsaid,
We monitor, measure, and analyze,
Our systems to avoid any surprise.

Chorus

We take ownership and accountability,
To deliver value with reliability.

Chorus

So when you need to deliver at scale,
You know who to call and who won’t fail,
Scaled Agile SRE,
Together we’re the ultimate recipe.

ChatGPT4 & me

To avoid making this post too verbose, I’ll focus on just two points that I find paramount for an SRE team living in the Scaled Agile Framework (SAFe) with a Kanban-style approach: capacity planning and value flow.

Capacity

What is your definition of capacity?

Most teams never ask themselves this simple question, and then struggle for months to produce better plans. Is capacity the sum of our hours per day? Or is it calculated from each person’s availability after removing the average amount of support, maintenance, security fixes, and operational emergencies?

While learning to drive, in general but even more so on a motorcycle, you’re introduced to the paradoxical concept of “expect the unexpected!”

Of course, this won’t always save your life, but it can greatly reduce the probability of an accident, because you’ll stick to best practices tested over decades of driving: don’t overtake when you can’t see the exit of a curve, don’t drive too close to the vehicle in front, and always consider the state of the road, your surroundings, and your tires before speeding up…

The good part of computer science is that you have a lot of incidents!

But this becomes a value only if you start measuring them and then learning from them.

So we should treat our work less like artistic craftsmanship and more from a statistical point of view, going back over closed user stories and extracting average completion times, split by category (support, emergencies, toil elimination, research…).

Nobody complains!

You now have a rough estimate of how much time is spent on variable actions and maintenance; let’s say 20 hours per week.

You also know your fixed appointments: at least 20 min per day for the daily meeting, 1 hour per week to share issues coming from development teams, and 1 hour for infrastructure refinement (open task evaluation, innovations to adopt or share with the team…).

Let’s say you’ll be neither on support (answering dev teams’ questions and providing them new resources) nor on call (supporting the operations team in solving emergencies).

This will give you around 40 – 20 – 1 (dailies) – 1 (weekly) – 1 (infra) – 1 (dev team weekly) – 0.5 (weekly with your manager) = 15.5 h/w of capacity, meaning 31h of capacity for the next iteration if it lasts two weeks.

Probably  less since you know you have already other two periodical useless meeting of one hour each, so let’s round to 13 h/w ≈ 150 min/day of “uninterrupted” work.

Well… actually to not get crazy and start physically fighting my hardware I need a couple of breaks, let’s say 15 min in the morning and the same in the middle of the afternoon.

That means ≈ 120 min/day of “uninterrupted” work.
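The back-of-the-envelope math above can be captured in a throwaway helper. The numbers are this article's examples, not universal constants, and the function name is made up:

```shell
# Estimate weekly "uninterrupted" capacity in hours, using the figures above.
estimate_capacity() {
  total=40          # contract hours per week
  variable=20       # measured average of support/maintenance/emergencies
  fixed=4.5         # dailies + weeklies + 1:1 with the manager
  extra=2           # the two recurring one-hour meetings
  awk -v t="$total" -v v="$variable" -v f="$fixed" -v e="$extra" \
      'BEGIN { printf "%.1f\n", t - v - f - e }'
}
estimate_capacity   # prints 13.5
```

Replace the constants with your own measured averages; the point is that the variable-work figure comes from data, not from optimism.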

Fine: I assume I can take that high-priority user story we estimated at 10 hours for the next iteration, plus a smaller one, leaving some contingency space.

We publish these results at the PI planning and to management, and nobody complains.

Long story short: if nobody ever complains, probably you’re not involving stakeholders correctly in your PI Planning, or worse, you’re not involving them at all!

And that’s bad.

Why are you working on those features?

Why do those features exist in the first place?

If your team is decoupled from the business view, are you sure that all this effort will help something? Or do you smell re-work and failure?

We should also mention that this planning didn’t leave any space for research and creative thinking. People will start solving issues quick and dirty, more and more often.

Yeah, I could call Moss and Roy for a good pair programming session, since they already solved this issue in the last iteration, but… who wants another meeting? Let’s copy-paste their work around and move on for now…

How much value does my work have?

To measure value, we need some kind of indicator.

There are a lot of articles on the pros and cons of setting metrics for our goal even before starting. Let’s just say here that you want a few custom indicators that have proven to be good estimators based on previous experience; they should take side effects into consideration, and they should be some kind of aggregated result, meaning they shouldn’t be easily gamed (working only to improve the metrics and not the quality).

Maybe we introduce general service availability and average service response time as two service level indicators (SLIs).
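A minimal sketch of how those two SLIs could be computed from a batch of request records (status code, latency in ms); the numbers below are invented:

```python
# Each record: (http_status, response_time_ms) — invented sample data.
requests = [
    (200, 120), (200, 95), (500, 30), (200, 180), (503, 25), (200, 110),
]

def availability(records):
    """Fraction of requests that did not fail with a 5xx status."""
    ok = sum(1 for status, _ in records if status < 500)
    return ok / len(records)

def avg_response_time(records):
    """Mean response time in milliseconds over all requests."""
    return sum(ms for _, ms in records) / len(records)

print(f"availability: {availability(requests):.2%}")              # 66.67%
print(f"avg response time: {avg_response_time(requests):.1f} ms")
```

In a real setup these would be aggregated by your monitoring stack over a rolling window rather than computed by hand, but the definitions stay the same.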

Then management starts working on a Value Stream Analysis to understand where this value comes from, since it was requested as a new feature by the customers before the current agile train.

They succeed in reducing periodical meetings by 50% and increasing 1-to-1 communication. Now dev teams are able to solve issues by themselves thanks to better documentation, runbooks, etc.

Conclusions

Imagine you are trying to implement a complex application in Golang; after a while you’re still failing, so you decide to switch to Java Quarkus, which you don’t know, and to mess around with it because you heard it is easier. After a while, guess what? It still doesn’t work.

The same goes for Agile frameworks. People expect them to solve stuff auto-magically, but if we don’t put effort into changing our own behavior, into measuring ourselves in order to improve (and not to give our manager micromanagement power), using the latest agile methodology will never solve our Friday afternoon issues.



Implementing continuous SBOM analysis

  1. From-cves-scanners-to-sbom-generation
  2. You are here!
  3. Dependency Track – To come!

After the deep theoretical dive of the previous article, let’s try to translate all that jazz into some real examples and practical use cases for implementing continuous SBOM file generation.

(Verse 1)
Grype and Syft, two brothers, so true
In the world of tech, they’re both making their due
One’s all about security, keeping us safe
The other’s about privacy, a noble crusade

(Chorus)
Together they stand, with a mission in hand
To make the digital world a better place, you understand
Grype and Syft, two brothers, so bright
Working side by side, to make the world’s tech just right

(Verse 2)
Grype’s the strong one, he’s got all the might
He’ll protect your data, day and night
Syft’s got the brains, he’s always so smart
He’ll keep your secrets, close to your heart

(Chorus)

ChatGPT

[Azure pipelines] Grype + Syft

Below is a working example of a sample Azure pipeline comprising two templates: a vulnerability scanner job and a parallel SBOM generation job.

The first job will leverage Grype, a well-known open-source project by Anchore, while for the second one we will use its sibling Syft.

First we make sure this becomes continuous scanning by selecting pushes on master as the trigger action, for example so that it starts after each merge of a completed pull request.

You can specify the full name of the branch (for example, master) or a wildcard (for example, releases/*). See Wildcards for information on the wildcard syntax. For more complex triggers that use exclude or batch, check the full syntax on Microsoft documentation.
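For example, a minimal trigger section along those lines (the branch names are just placeholders):

```yaml
trigger:
  branches:
    include:
      - master
      - releases/*
```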

In the Grype template we will

  • download the latest binary from the public project
  • set the needed permissions to read and execute the binary
  • check if there is a grype.yaml with some extra configurations
  • run the vulnerability scanner on the given image. The Grype database will be updated before each scan
  • save the results in a file “output_grype”
  • use output_grype to check whether there are alerts rated at least High; if so, we also want a Warning to be raised in our Azure DevOps web interface.
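A sketch of how such a Grype template could look; the script body, file names and parameter names here are assumptions for illustration, not the exact tested code linked below:

```yaml
# grype-scan-template.yml — hypothetical template following the steps above
parameters:
  - name: image
    type: string

steps:
  - script: |
      # download the latest Grype binary from the public project
      curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b ./bin
      # set the needed permissions to read and execute the binary
      chmod +rx ./bin/grype
      # check if there is a grype.yaml with some extra configuration
      CONFIG=""
      [ -f grype.yaml ] && CONFIG="-c grype.yaml"
      # run the scan (Grype updates its vulnerability DB before each scan)
      # and save the results in output_grype
      ./bin/grype $CONFIG "${{ parameters.image }}" -o table | tee output_grype
      # raise a Warning in the Azure DevOps UI if High/Critical alerts are present
      if grep -qE 'High|Critical' output_grype; then
        echo "##vso[task.logissue type=warning]High severity vulnerabilities found in ${{ parameters.image }}"
      fi
    displayName: Grype vulnerability scan
```

The `##vso[task.logissue type=warning]` logging command is what makes the Warning show up in the Azure DevOps run summary without failing the job.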

In the Syft template we will have a similar list of parameters, with the addition of the SBOM file format (json, text, cyclonedx-xml, cyclonedx-json, and much more).

After scanning our image for all its components, we then publish the artifact in our pipeline, since we’ll probably want to pull this list from an SBOM analysis tool (e.g. OWASP Dependency-Track, see the previous article).

Go to the code below. |🆗tested code |

Github Actions

In GitHub it is even easier, since Syft is offered as a ready-made action by Anchore.

By default, this action will execute a Syft scan in the workspace directory and upload a workflow artifact SBOM in SPDX format. It will also detect if being run during a GitHub release and upload the SBOM as a release asset.

A sample would be something like this:

name: Generate and Publish SBOM

on:
  push:
    branches:
      - main

env:
  DOCKER_IMAGE: <your-docker-image-name>

jobs:
  generate_sbom:
    runs-on: ubuntu-20.04

    steps:
    - name: Checkout code
      uses: actions/checkout@v2

    - name: Generate SBOM using Anchore SBOM Action
      uses: anchore/sbom-action@v0
      with:
        image: ${{ env.DOCKER_IMAGE }}
        format: spdx-json
        output-file: anchore_sbom.json

    - name: Publish SBOM
      uses: actions/upload-artifact@v2
      with:
        name: sbom.json
        path: anchore_sbom.json

Code Samples

cve-sbom-azure-pipeline.yml



From CVEs scanners to SBOM generation

Example of Software Life Cycle and Bill of Materials Assembly Line

DevOps companies have always been in constant pursuit of making their software development process faster, more efficient, and more secure. In the quest for better software security, a shift is happening from using traditional vulnerability scanners to utilizing Software Bill of Materials (SBOM) generation. This article explains why DevOps companies are making the switch and how SBOM generation provides better security for their software.

A CVE is known to all, it’s a security flaw call
It’s a number assigned, to an exposure we’ve spied
It helps track and prevent, any cyber threats that might hide!

Vulnerability scanners are software tools that identify security flaws and vulnerabilities in the code, systems, and applications. They have been used for many years to secure software and have proven to be effective. However, the increasing complexity of software systems, the speed of software development, and the need for real-time security data have exposed the limitations of traditional vulnerability scanners.

Executive Order 14028

Executive Order 14028, signed by President Biden on May 12, 2021, aims to improve the cybersecurity of federal networks and critical infrastructure by strengthening software supply chain security. The order requires federal agencies to adopt measures to ensure the security of software throughout its entire lifecycle, from development to deployment and maintenance.

NIST consulted with the National Security Agency (NSA), Office of Management and Budget (OMB), Cybersecurity & Infrastructure Security Agency (CISA), and the Director of National Intelligence (DNI) and then defined “critical software” by June 26, 2021.  

Such guidance shall include standards, procedures, or criteria regarding providing a purchaser a Software Bill of Materials (SBOM) for each product directly or by publishing it on a public website.

Object Model

CycloneDX Object Model Swimlane
SBOM Object Model

SBOM generation is a newer approach to software security that provides a comprehensive view of the components and dependencies that make up a software system. SBOMs allow devops companies to see the full picture of their software and understand all the components, including open-source libraries and dependencies, that are used in their software development process. This information is critical for devops companies to have, as it allows them to stay on top of security vulnerabilities and take the necessary measures to keep their software secure.

The main advantage of SBOM generation over vulnerability scanners is that SBOMs provide a real-time view of software components and dependencies, while vulnerability scanners only provide information about known vulnerabilities.

One practical example of an SBOM generation tool is Trivy, an open-source scanner for container images and runtime environments that can also produce SBOMs. It detects vulnerabilities in real time and integrates with the CI/CD pipeline, making it an effective tool for DevOps companies.

Another example is Anchore Grype, an open-source vulnerability scanner that can consume the SBOMs produced by its companion tool Syft, providing visibility into software components and dependencies and making it easier for DevOps companies to stay on top of security vulnerabilities.

OWASP Dependency-Track integrations

Finally, Dependency Track is another great tool by OWASP that allows organizations to identify and reduce risk in the software supply chain.
The Open Web Application Security Project® (OWASP) is a nonprofit foundation that works to improve the security of software through community-led open-source software projects.

The main features of Dependency Track include:

  1. Continuous component tracking: Dependency Track tracks changes to software components and dependencies in real-time, ensuring up-to-date security information.
  2. Vulnerability Management: The tool integrates with leading vulnerability databases, including the National Vulnerability Database (NVD), to provide accurate and up-to-date information on known vulnerabilities.
  3. Policy enforcement: Dependency Track enables organizations to create custom policies to enforce specific security requirements and automate the enforcement of these policies.
  4. Component Intelligence: The tool provides detailed information on components and dependencies, including licenses, component age, and other relevant information.
  5. Integration with DevOps tools: Dependency Track integrates with popular DevOps tools, such as Jenkins and GitHub, to provide a seamless experience for devops teams.
  6. Reporting and Dashboards: Dependency Track provides customizable reports and dashboards to help organizations visualize their software components and dependencies, and identify potential security risks.


CKS Challenge #1

Here we’re going to see together how to solve a bugged Kubernetes architecture, thanks to a nice KodeKloud challenge, where:

  1. The persistent volume claim can’t be bound to the persistent volume
  2. Load the ‘AppArmor` profile called ‘custom-nginx’ and ensure it is enforced.
  3. The deployment alpha-xyz uses an insecure image and needs to mount the ‘data volume’.
  4. ‘alpha-svc’ should be exposed on ‘port: 80’ and ‘targetPort: 80’ as ClusterIP
  5. Create a NetworkPolicy called ‘restrict-inbound’ in the ‘alpha’ namespace. Policy Type = ‘Ingress’. Inbound access only allowed from the pod called ‘middleware’ with label ‘app=middleware’. Inbound access only allowed to TCP port 80 on pods matching the policy
  6. ‘external’ pod should NOT be able to connect to ‘alpha-svc’ on port 80


1 Persistent Volume Claim

So first of all we notice the PVC is there but is pending, so let’s look into it

One of the first differences we notice is the access mode, which is ReadWriteOnce on the PVC but ReadWriteMany on the PV.

We also want to check that the storage class is present on the cluster.

Let’s fix that by creating a local-storage resource:
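A minimal sketch of such a StorageClass; `WaitForFirstConsumer` is the usual binding mode for statically provisioned local volumes:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```

With no dynamic provisioner, a claim using this class stays in “waiting for first consumer” until a pod actually mounts it.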

Get the PVC YAML, delete the extra lines and modify access mode:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  finalizers:
  - kubernetes.io/pvc-protection
  name: alpha-pvc
  namespace: alpha
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: local-storage
  volumeMode: Filesystem

Now the PVC is “waiting for first consumer”… so let’s move on to fixing the deployment 🙂

https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistentvolumeclaims

https://kubernetes.io/docs/concepts/storage/storage-classes/#local


2 AppArmor

Before fixing the deployment we need to load the AppArmor profile, otherwise the pod won’t start.

To do this we move our profile inside /etc/apparmor.d and load it in enforce mode.


3 DEPLOYMENT

For this exercise the permitted images are: ‘nginx:alpine’, ‘bitnami/nginx’, ‘nginx:1.13’, ‘nginx:1.17’, ‘nginx:1.16’and ‘nginx:1.14’.
We use ‘trivy‘ to find the image with the least number of ‘CRITICAL’ vulnerabilities.

Let’s take a look at what we have now

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: alpha-xyz
  name: alpha-xyz
  namespace: alpha
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alpha-xyz
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: alpha-xyz
    spec:
      containers:
      - image: ?
        name: nginx

We can now scan all the permitted images and see that the most secure one is the alpine version

So we can now fix the deployment with three changes

  • put nginx:alpine image
  • add alpha-pvc as a volume named ‘data-volume’
  • insert the annotation for the AppArmor profile created before
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: alpha-xyz
  name: alpha-xyz
  namespace: alpha
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alpha-xyz
  strategy: {}
  template:
    metadata:
      labels:
        app: alpha-xyz
      annotations:
        container.apparmor.security.beta.kubernetes.io/nginx: localhost/custom-nginx
    spec:
      containers:
      - image: nginx:alpine
        name: nginx
        volumeMounts:
        - name: data-volume
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data-volume
        persistentVolumeClaim:
          claimName: alpha-pvc
---

4 SERVICE

We can be fast on this with one line

kubectl expose deployment alpha-xyz --type=ClusterIP --name=alpha-svc --namespace=alpha --port=80 --target-port=80

5 NETWORK POLICY

Here we want to apply

  • over pods matching ‘alpha-xyz’ label
  • only for incoming (ingress) traffic
  • allowing inbound traffic only from pods labelled ‘app=middleware’
  • over port 80
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-inbound
  namespace: alpha
spec:
  podSelector:
    matchLabels:
      app: alpha-xyz
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: middleware
      ports:
        - protocol: TCP
          port: 80
        

We can now test that the route is closed between the external pod and alpha-xyz

Done!



Connect to an external service on a different AKS cluster through private network

My goal is to call a service on an AKS cluster (aks1/US) from a pod on a second AKS cluster (aks2/EU).
These clusters will be on different regions and should communicate over a private network.

For the cluster networking I’m using the Azure CNI plugin.

Above you can see a schema of the two possible final architectures: an ExternalName or ExternalIP service on the US AKS pointing to a private EU ingress controller IP.

So, after some reading and some video listening, it seemed to me that the best option was to use an ExternalName service on AKS2 calling a service defined in a custom private DNS zone (ecommerce.private.eu.dev), with the two VNets peered beforehand.

Address space for aks services:
dev-vnet  10.0.0.0/14
=======================================
dev-test1-aks   v1.22.4 - 1 node
dev-test1-vnet  11.0.0.0/16
=======================================
dev-test2-aks   v1.22.4 - 1 node
dev-test2-vnet  11.1.0.0/16 

After some trials I can get connectivity between the pod networks, but I was never able to reach the service network from the other cluster.

  • I don’t have any active firewall
  • I’ve peered all three networks: dev-test1-vnet, dev-test2-vnet, dev-vnet (services CIDR)
  • I’ve created a Private DNS zone private.eu.dev where I’ve put the “ecommerce” A record (10.0.129.155) that should be resolved by the ExternalName service

dev-test1-aks (EU cluster):

kubectl create deployment eu-ecommerce --image=k8s.gcr.io/echoserver:1.4 --port=8080 --replicas=1

kubectl expose deployment eu-ecommerce --type=ClusterIP --port=8080 --name=eu-ecommerce

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.1.1/deploy/static/provider/cloud/deploy.yaml

kubectl create ingress eu-ecommerce --class=nginx --rule=eu.ecommerce/*=eu-ecommerce:8080

This is the ingress rule:

❯ kubectl --context=dev-test1-aks get ingress eu-ecommerce-2 -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: eu-ecommerce-2
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: lb.private.eu.dev
    http:
      paths:
      - backend:
          service:
            name: eu-ecommerce
            port:
              number: 8080
        path: /ecommerce
        pathType: Prefix
status:
  loadBalancer:
    ingress:
    - ip: 20.xxxxx

This is one of the ExternalName services I’ve tried on dev-test2-aks:

apiVersion: v1
kind: Service
metadata:
  name: eu-services
  namespace: default
spec:
  type: ExternalName
  externalName: ecommerce.private.eu.dev
  ports:
    - port: 8080
      protocol: TCP

These are some of my tests:

# --- Test externalName 
kubectl --context=dev-test2-aks run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- wget -qO- http://eu-services:8080
: '
    wget: cant connect to remote host (10.0.129.155): Connection timed out
'

# --- Test connectivity AKS1 -> eu-ecommerce service
kubectl --context=dev-test1-aks run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- wget -qO- http://eu-ecommerce:8080
kubectl --context=dev-test1-aks run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- wget -qO- http://10.0.129.155:8080
kubectl --context=dev-test1-aks run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- wget -qO- http://eu-ecommerce.default.svc.cluster.local:8080
kubectl --context=dev-test1-aks run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- wget -qO- http://ecommerce.private.eu.dev:8080
# OK client_address=11.0.0.11

# --- Test connectivity AKS2 -> eu-ecommerce POD
kubectl --context=dev-test2-aks run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- wget -qO- http://11.0.0.103:8080
#> OK


# --- Test connectivity - LB private IP
kubectl --context=dev-test1-aks run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- wget --no-cache -qO- http://lb.private.eu.dev/ecommerce
#> OK
kubectl --context=dev-test2-aks run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- wget --no-cache -qO- http://lb.private.eu.dev/ecommerce
#> KO  wget: can't connect to remote host (10.0.11.164): Connection timed out
#>> This is the ClusterIP! -> Think twice!


# --- Traceroute gives no informations
kubectl --context=dev-test2-aks  run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- traceroute -n -m4 ecommerce.private.eu.dev
: '
    *  *  *
    3  *  *  *
    4  *  *  *
'

# --- test2-aks can see the private dns zone and resolve the hostname
kubectl --context=dev-test2-aks run -it --rm --restart=Never busybox --image=gcr.io/google-containers/busybox -- nslookup ecommerce.private.eu.dev
: ' Server:    10.0.0.10
    Address 1: 10.0.0.10 kube-dns.kube-system.svc.cluster.local
    Name:      ecommerce.private.eu.dev
    Address 1: 10.0.129.155
'

I’ve also created inbound and outbound network policies for the AKS networks:

  • on dev-aks (10.0/16) allow all incoming from 11.1/16 and 11.0/16
  • on dev-test2-aks allow any outbound

SOLUTION: Set the LB as an internal LB exposing the external IP to the private subnet

kubectl --context=dev-test1-aks patch service -n ingress-nginx ingress-nginx-controller --patch '{"metadata": {"annotations": {"service.beta.kubernetes.io/azure-load-balancer-internal": "true"}}}'

This article is also on Medium 🙂



Differences from Scrum, Lean and Disciplined Agile Delivery

So, your manager just finished a SCRUM course, because your enterprise company thinks it is the cutting-edge management process and now everything should be SCRUM or something very close…

Are you doing SCRUM?

How much time do you dedicate to sprint planning?

Do you have a fixed, cross-functional and autonomous team assigned full time to fixed-length sprints?

Do you have a dedicated person for managing business requirements inside a backlog?

Are you holding short (5 min per person) daily stand-up meetings where everyone shares just the blocking points with the rest of the team and the Scrum Master?

Are you sure you need Scrum?

Applying a complex methodology when you are deep in technical debt will just make things worse.
It is what Martin Fowler calls Flaccid Scrum.

In this case what you really need to do first is to improve your delivery fluency, starting from practices like Continuous Delivery or applying pragmatic methodologies like Extreme Programming.

For many people, this situation is exacerbated by Scrum because Scrum is a process that’s centered on project management techniques and deliberately omits any technical practices, in contrast to (for example) Extreme Programming.

Martin Fowler

Fluent Delivering teams not only focus on business value, they realize that value by shipping as often as their market will accept it. This is called “shipping on the market’s cadence.”

Delivering teams are distinguished from Focusing teams not only by their ability to ship, but their ability to ship at will.

Extreme Programming (XP) pioneered many of the techniques used by delivering teams and it remains a major influence today. Nearly all fluent teams use its major innovations, such as continuous integration, test-driven development, and “merciless” refactoring.

In recent years, the DevOps movement has extended XP’s ideas to modern cloud-based environments.

Triple constraint triangle

Comparing Scrum with Lean

So, let’s say your company’s managers have already read this article and its related sources, so you’re really going fast on your CI/CD processes and almost everything is versioned and monitored…

How to manage that in a big company with a lot of distributed teams?

Let’s take a quick look at Lean and then at Disciplined Agile Delivery.

SCHEDULE / TIME

Agile: fixed timeboxes and release plans are used to schedule your next activities. You need to sort your activities in order to plan your tasks by priority in a managed backlog.

Lean: the schedule can vary based on the priority of the tasks shown on a Kanban board that should always be visible to everyone. There is no need for the whole team to be full time on one task; the experts can use a divide-and-conquer approach, focusing on the most critical parts first and releasing when possible, following the customer service agreements.

SCOPE

Agile: the sprint backlog will contain the minimum scope necessary to develop the next product release

Lean: the tasks are generated by customer tickets, in which customers also specify the urgency level.

BUDGET

Agile: ROI and Burndown charts are used to monitor budget during the project

Lean: KPIs and Service Level Agreements are used to continuously check product quality and production chain efficiency

Disciplined Agile Delivery

The Disciplined Agile Delivery (DAD) process framework is a people-first, learning-oriented hybrid agile approach to IT solution delivery. It has a risk-value lifecycle, is goal-driven, is scalable, and is enterprise aware.

Here are the main differences between Scrum, Lean and Disciplined Agile Delivery.

PEOPLE

Keep the docs to the bare minimum.
The traditional approach of having formal handoffs of work products (primarily documents) between different disciplines such as requirements, analysis, design, test, and development is a very poor way to transfer knowledge; it creates bottlenecks and proves in practice to be a huge source of waste of both time and money.

Teams should be cross-functional with no internal hierarchy. In Scrum for instance, there are only three Scrum team roles: Scrum Master, product owner, and team member. The primary roles described by DAD are stakeholder, team lead, team member, product owner, and architecture owner.

LEARNING

The first aspect is domain learning: how are you exploring and identifying what your stakeholders need, and perhaps more importantly, how are you helping the team to do so?

The second aspect is process learning, which focuses on learning to improve your process at the individual, team, and enterprise levels.

The third aspect is technical learning, which focuses on understanding how to effectively work with the tools and technologies being used to craft the solution for your stakeholders.

What may not be so obvious is the move away from promoting specialization among your staff and instead fostering a move toward people with more robust skills, something called being a generalizing specialist. Progressive organizations aggressively promote learning opportunities for their people outside their specific areas of specialty, as well as opportunities to actually apply these new skills.

HYBRID PROCESS

DAD takes elements from the other methodologies to tailor a process that best suits an enterprise agile team:

  • prioritized backlog from Scrum
  • Kanban dashboard and limit work in progress approach from Kanban (Toyota production system)
  • Agile ways to manage data and documents
  • CI/CD, TDD, collective ownership practices from Extreme Programming and DevOps

IT SOLUTIONS OVER SOFTWARE

As IT professionals we do far more than just develop software. Yes, software is clearly important, but in addressing the needs of our stakeholders we often provide new or upgraded hardware, change the business/operational processes that stakeholders follow, and even help change the organizational structure in which our stakeholders work.

Agile was created mostly by developers and consultants; we need to focus more on business needs and company process optimization.

Goal-Driven Delivery Lifecycle

  • It is a delivery process extending the Scrum one, starting from the initial vision to the release in production;
  • explicit phases: Inception, Construction and Transition;
    • Inception: initiate team, schedule stakeholders meetings, requirements collection, architecture design, align with company policies, release planning, set up environment
    • Construction: CI, CD, burndown charts, TDD, refactoring, retrospective, etc..
    • Transition: delivering in production. This stage contains steps like UAT, data migration, support environment preparation, stakeholders alignment, finalize documentation and solution deployment.
  • put the phases in the right context: evaluate system preparation activities before development start and management of the system by other groups after the final release
  • explicit milestones

Conclusions

Here we have briefly seen the main differences between Scrum, Lean and Disciplined Agile Delivery.

DAD is a very complex process, and to find out the details there is just THE book to read, listed in the final references.

A complete enterprise delivery process is something that requires months of work by an architecture board, but the point here is how to take the right direction as soon as possible, avoiding being hypnotized by buzzwords like Scrum or thinking that we are really agile just because we hold a one-hour stand-up meeting every morning.

Start by removing your technical debt, firmly following XP and DevOps practices. Then start formalizing your process methodology and make sure everyone is walking the same path.

REFERENCES:

Supervised learning regression analysis on Google stocks

Supervised learning on Google stock analysis and predictions

Abstract

We study some tech stock prices through data visualization and financial techniques, focusing on those intended to give a reliable forecast, so that brokers have a basis on which to decide the best moment to sell or buy stocks. We first analyze a year of data about the biggest companies, such as Amazon, Google, Apple and Microsoft, but right after that we focus on Google stocks.

Next we leave the financial tools for supervised learning analysis. These machine learning processes learn a function from an input type to an output type using data comprising examples. Furthermore, we’ll talk specifically about supervised regression, meaning that we’re interested in inferring a real-valued function whose values correspond to the mean of a dependent variable (stock prices).

We first applied linear regression to the last 6 years of Google Trends data for the word ‘google’, specifically searched in the financial news domain, against the last 6 years of Google stock prices. From there we change our feature domain to a multivariate input, i.e. we use other stock prices (AAPL, MSFT, TWTR, AMZN) to study the accuracy of other algorithms such as multivariate linear regression, an SVR and a Random Forest.
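A minimal sketch of the multivariate linear regression step, using plain least squares via NumPy rather than whatever library the study actually used, and invented numbers instead of the real price series:

```python
import numpy as np

# Invented stand-ins for the real price series (rows = days,
# columns = AAPL, MSFT, TWTR, AMZN closing prices).
rng = np.random.default_rng(0)
X = rng.uniform(50, 200, size=(100, 4))
true_coef = np.array([0.5, 1.2, -0.3, 0.8])
y = X @ true_coef + 10 + rng.normal(0, 1, size=100)  # synthetic "GOOG" prices

# Fit y ≈ X·w + b by ordinary least squares (intercept as an extra column).
A = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

predictions = A @ w
rmse = np.sqrt(np.mean((y - predictions) ** 2))
print("coefficients:", w[:-1], "intercept:", w[-1])
print("RMSE:", rmse)
```

The same fit/predict/score loop carries over unchanged when swapping the estimator for an SVR or a Random Forest.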

keywords : Finance, Stock Price Analysis, MACD, Machine Learning, Linear Regression, SVR, Random Forest, Data Visualization, Python, R

What to do next ?

  • Do you see any error? Please tell me what to correct and why;
  • Implement these algorithms on other stocks and compare results
  • Add the R squared to the RMSE comparison
  • Try to predict future stocks prices instead of contemporary ones
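For the R squared point above, a small sketch showing both metrics side by side on invented numbers:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1 - ss_res / ss_tot)

y_true = np.array([10.0, 12.0, 14.0, 16.0])
y_pred = np.array([11.0, 12.0, 13.0, 17.0])
print(rmse(y_true, y_pred), r_squared(y_true, y_pred))
```

Unlike RMSE, R squared is scale-free, which makes it handy when comparing models across stocks with very different price levels.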

Amazon, Apple, Microsoft and Google pairplot

Automatic check of the remaining data allowance, for the 3 network (tre.it)

Script for automatically checking (from 8:00 to 23:00, every 30 min) the remaining data allowance of a 3 subscription. If the value drops below a preset threshold (500MB), it sends an alert email. You need to be connected through the 3 network.

The only working version is the Selenium one, which requires Firefox.

But with a few small changes I’m sure you can use Chrome if you prefer, or point it at your own provider’s website.

If it’s actually useful, let me know and it can easily be improved. 🙂

Data mining – 2014 homework solutions

Homework solutions (PDF + code).

  1. Homework 1 – Sol
  2. Homework 2 – Sol
  3. Homework 3 – Sol
  4. Homework 4 – Sol
  5. Homework 5 – Sol
  6. Homework 6 – Sol

Algorithm Design / Theoretical Computer Science – 2015 – Homework solutions

Hi

Since the homework questions are often similar to those of previous years, here are my solutions.
They cost us weeks of work, too much to die forgotten on my hard disk.

Site course

  • Homework 2 :  Solution
    Themes: Set Cover, partial set cover, max cover, linear programming (LP), integer linear programming (ILP), maximum weight matching, game theory, approximation, Steiner tree, minimum spanning tree.

For the LaTeX version of the solutions, please donate and I will gladly send it to you. 🙂

