I go seeking liberty, which is so dear, as he knows who for her refuses life.

Category: management

The Art of Hiring in the Age of AI: A Manager’s Survival Guide

Navigating the modern hiring landscape has indeed become a complex dance, especially with the rise of AI tools that candidates use to enhance their applications (check the extra resources at the end). It’s no longer just about screening resumes; it’s about discerning genuine talent in a sea of algorithmically polished presentations.

Here are some thoughts on this topic to illustrate the challenges and, more importantly, the strategies involved.

The Evolving Battlefield: AI and the Hiring Process

Let’s be realistic: candidates are using AI, and basically that’s OK.

They’re using it to refine their resumes, tailor their cover letters, and even practice interview questions. Some might even use it during the interview itself (more on that later).
While some see this as a threat, I argue that it’s an opportunity. It forces us, as managers or senior employees, to become better interviewers, to focus on the core skills and qualities that truly matter. The old ways of hiring, where keywords and surface-level knowledge were sufficient, are no longer enough. We need to dig deeper.

Countering the “AI Advantage”: It’s Not About Detection, It’s About Discernment

Many worry about “detecting” AI-generated content.

While tools like GPTZero can be helpful, they shouldn’t be your primary focus. Think about it: even if you know a candidate used AI, does it automatically disqualify them? Perhaps they used it to overcome a language barrier or to present their skills more effectively. Our goal isn’t to police AI usage; it’s to find the best person for the job.

The strongest counter-argument to embracing this reality is the fear of being deceived. “How can we trust anything they say?” This is a valid concern, but it misses the point. We’ve always had to deal with candidates exaggerating their skills. AI just raises the stakes. The solution isn’t to try and prevent AI use, but to design our hiring process in a way that reveals true capabilities, regardless of how the candidate presents themselves initially.

Beyond the Resume: A Multi-Layered Approach

  1. The Initial Screen (Human-Powered, AI-Aware):
    The HR screen remains important, but the focus shifts. Instead of just checking boxes, HR should be trained to look for narrative flow and consistency. AI can create a perfect-looking resume, but it often struggles with the nuances of a real career journey.
    HR should ask open-ended questions about career transitions, specific projects, and challenges overcome. Look for genuine passion and articulation, not just keyword recitation.
    • Implement a “fuzzy filtering” approach where 70-80% of candidates are selected through traditional merit-based screening (see the sketch after this list).
    • Randomly select an additional 20-30% from the remaining pool to introduce unpredictability and reduce systemic biases.
    • Create screening questions that require lateral thinking and cannot be easily anticipated by AI.
    • Use anonymized initial screenings to reduce unconscious bias.
    • Clearly communicate the screening process to candidates.
  2. The Technical Gauntlet:
    Traditional coding tests are easily gamed, especially with AI assistance. Instead, focus on collaborative problem-solving.
    Have the candidate work on a realistic problem with a member of your team. Observe their thought process, their communication skills, and their ability to adapt to feedback. Don’t just look at the final solution; look at how they arrived there. Ask them to explain their choices, their reasoning, and their understanding of the trade-offs involved.
    • Develop a scoring matrix that weights factors beyond technical keywords.
      An example scoring matrix is given in the table below, after this list.
    • Create “trap questions” that reveal AI-generated content by introducing subtle technical inconsistencies.
    • Implement real-time problem-solving challenges that can’t be pre-prepared.
  3. The Behavioral Deep Dive:
    This is where you assess the candidate’s soft skills, their cultural fit, and their potential for growth.
    Instead of generic behavioral questions, use scenario-based inquiries. Present them with a realistic workplace situation and ask them how they would handle it.
    For example: “Imagine a critical bug is discovered in production just before a major release. How do you approach the situation?” This will reveal their problem-solving skills, their communication style, and their ability to work under pressure.
  4. The “Explain It Like I’m Five” Test:
    Ask the candidate to explain a complex technical concept in simple terms. This is a great way to assess their true understanding of the subject matter. If they can’t explain it clearly and concisely, they probably don’t understand it as well as they claim.
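
Before getting to the scoring matrix referenced in step 2, here is a minimal sketch of the “fuzzy filtering” idea from step 1; the 75/25 split, the candidate structure, and the merit_score function are illustrative assumptions, not a prescribed implementation.

import random

def fuzzy_filter(candidates, merit_score, slots, merit_ratio=0.75):
    """Fill most slots by merit, the rest at random from the remaining pool."""
    ranked = sorted(candidates, key=merit_score, reverse=True)
    merit_slots = round(slots * merit_ratio)
    selected = ranked[:merit_slots]
    remaining = ranked[merit_slots:]
    selected += random.sample(remaining, min(slots - merit_slots, len(remaining)))
    return selected

# Example: shortlist 10 out of 40 applicants scored by an initial resume screen.
pool = [{"name": f"candidate_{i}", "screen_score": random.random()} for i in range(40)]
shortlist = fuzzy_filter(pool, lambda c: c["screen_score"], slots=10)
print([c["name"] for c in shortlist])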

Factor | Description | Weight
Technical Skills | Proficiency in relevant technical skills and tools | 30%
Problem-Solving Ability | Ability to approach and solve complex problems effectively | 20%
Communication Skills | Clarity, conciseness, and effectiveness in verbal and written communication | 15%
Cultural Fit | Alignment with company values, mission, and team dynamics | 10%
Adaptability | Ability to adapt to new challenges, technologies, and environments | 10%
Emotional Intelligence | Ability to understand and manage one's own emotions and those of others | 10%
Learning and Growth Mindset | Demonstrated willingness and ability to learn and grow professionally | 10%
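
To make the matrix concrete, here is a minimal sketch of how a weighted score could be derived from it; the factor keys, the 1-5 rating scale, and the normalization step are illustrative assumptions rather than a prescribed process.

# Weights mirror the table above; ratings are assumed to be on a 1-5 scale.
WEIGHTS = {
    "technical_skills": 0.30,
    "problem_solving": 0.20,
    "communication": 0.15,
    "cultural_fit": 0.10,
    "adaptability": 0.10,
    "emotional_intelligence": 0.10,
    "learning_mindset": 0.10,
}

def weighted_score(ratings):
    """Combine per-factor ratings into one score, normalized by total weight."""
    total_weight = sum(WEIGHTS.values())
    return sum(WEIGHTS[factor] * ratings.get(factor, 0) for factor in WEIGHTS) / total_weight

# Example: strong fundamentals, weaker on communication and emotional intelligence.
candidate = {
    "technical_skills": 4,
    "problem_solving": 5,
    "communication": 3,
    "cultural_fit": 4,
    "adaptability": 4,
    "emotional_intelligence": 3,
    "learning_mindset": 5,
}
print(round(weighted_score(candidate), 2))  # ~4.05 on the 1-5 scale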

Embracing the Inevitable: AI as a Tool, Not a Threat

Instead of fearing AI, we should embrace it as a tool that can help us identify the best candidates. Think of it this way: if a candidate can use AI effectively to enhance their application, it demonstrates a certain level of technical savvy and resourcefulness. These are valuable qualities in a software engineer.

Ask them directly whether they used AI during the process and how, or whether they are aware of possible related security or ethical issues. You could even ask about the most successful prompt-engineering techniques they’ve found; it might be an update opportunity for you 😉

The key is to design our hiring process in a way that goes beyond the surface level. We need to focus on the human element: critical thinking, problem-solving, communication, collaboration, and a genuine passion for technology. These are the qualities that AI can’t replicate (yet), and they’re the qualities that will ultimately determine a candidate’s success.

Trust Your (educated & randomized) Gut

In the end, hiring is still a human endeavor. Don’t be afraid to trust your gut. If something feels too good to be true, it probably is. If a candidate’s answers sound too polished and rehearsed, dig deeper. Ask follow-up questions, challenge their assumptions, and see how they respond. The best candidates will be able to think on their feet, articulate their ideas clearly, and demonstrate a genuine passion for the work. Admitting ignorance on less critical topics is also a good signal. These are the qualities that matter, regardless of how candidates present themselves initially.

Extra resources:

Game Theory in Engineering Management

Playing the Infinite Game

Ever wondered why some engineering teams consistently deliver amazing results while others struggle with endless firefighting?
I think the secret often lies not in individual brilliance, but in understanding the games we’re playing. Let’s see whether it makes sense to apply game theory to engineering management.


The Management Puzzle: What Are They Really Trying to Solve?

Let’s be honest – managing engineers sometimes feels like herding very intelligent cats who speak in algorithms. The core challenges typically boil down to:

  1. How do we align individual incentives with team goals?
  2. How do we optimize for long-term success rather than short-term wins?
  3. How do we create systems that naturally promote collaboration?

If you’re nodding along, congratulations – you’ve just identified the classic elements of what game theorists call a “multiplayer dynamic game with incomplete information.”
Sounds fancy, right? Don’t worry, we’ll break it down with some real-world examples.

Dynamic, Simultaneous, Multiplayer Games with Limited Information

Let’s break this mouthful down:

  • Dynamic: The game changes as you play it (like your codebase)
  • Simultaneous: Players make moves without knowing others’ choices (like parallel development)
  • Multiplayer: Multiple stakeholders involved (developers, users, business)
  • Limited Information: Nobody has the full picture (just like real software projects!)

This is why simplistic “if-then” management strategies often fail. We’re not playing chess (perfect information) or poker (zero-sum); we’re playing something more like a massive multiplayer online game where the rules themselves evolve.

The Zero-Sum Trap

Here’s a common scenario: Your team needs to ship a critical feature. Alice is brilliant at backend work, while Bob excels at frontend. Traditional management might focus on individual metrics – lines of code, tickets closed, etc. But that’s playing a zero-sum game where Alice and Bob compete for recognition rather than collaborate for success.

Instead, let’s apply game theory principles to create a positive-sum game:

Practical Applications

1. Design Better Incentive Systems

Let’s explore some concrete system-level incentives that create positive-sum games:

The Documentation Multiplier

# Components:
# story_points_completed: Number of story points delivered in sprint (0-100)
# documentation_coverage: % of code with clear documentation (0.0-1.0)
# knowledge_sharing_index: Number of knowledge sharing activities / team size
# individual_output: Tasks completed per sprint normalized to 1.0

sprint_success = story_points_completed * documentation_coverage
team_performance = individual_output * (1 + knowledge_sharing_index)

# Example:
# A team completing 40 points with 80% documentation: 40 * 0.8 = 32
# An individual completing 10 tasks with 3 knowledge shares in a team of 5:
# 10 * (1 + 3/5) = 16

The Quality-Speed Balance

# Components:
# speed: Tasks completed per week / team capacity (0.0-1.0)
# quality: (100 - bugs_found) / 100 (0.0-1.0)
# Both metrics normalized to 1.0 for fair comparison

# Traditional approach measures them separately
performance = (speed + quality) / 2

# System thinking combines them multiplicatively
performance = speed * quality

# Example:
# Traditional: (0.9 speed + 0.5 quality) / 2 = 0.7
# System thinking: 0.9 * 0.5 = 0.45 (penalizes quality issues more)

The Learning Network Effect

# Components:
# base_productivity: Standard team velocity (story points/sprint)
# cross_trained_pairs: Number of engineer pairs who can cover each other's work
# total_possible_pairs: n*(n-1)/2 where n is team size
# technical_output: Completed features per sprint
# unique_viewpoints_contributed: Number of team members actively contributing to technical discussions

team_capability = base_productivity * (1 + cross_trained_pairs/total_possible_pairs)
innovation_score = technical_output * unique_viewpoints_contributed

# Example for a team of 5:
# Base productivity: 30 points
# Cross-trained pairs: 6 out of 10 possible pairs
# team_capability = 30 * (1 + 6/10) = 48 points
# 
# If they complete 5 features with all 5 members contributing:
# innovation_score = 5 * 5 = 25

This multiplicative approach to metrics creates several beneficial effects:

  1. Zero in any component drastically reduces overall performance
  2. Improvements in any area benefit the entire system
  3. Maximum performance requires balanced optimization across all factors

These formulas can be adjusted based on team size and organizational priorities, but the key is maintaining the multiplicative relationship between components. This ensures that no single metric can be optimized at the expense of others without harming overall performance.

2. Understanding Reputation Formation

In game theory, reputation isn’t just about being “good” or “bad” – it’s a complex dynamic system that evolves through repeated interactions. Think of it as your team’s internal cryptocurrency: every interaction either mines or burns tokens of trust.

Consider how senior engineers build reputation: they don’t just write good code; they create environments where others can write good code. This is what game theorists call “reputation externalities” – your actions don’t just affect your reputation, but influence how others build theirs.

The Trust Equation

Research in game theory suggests that trust can be modeled as:

# Components:
# credibility: Track record of making good on promises (0-1)
# reliability: Consistency of behavior over time (0-1)
# intimacy: Psychological safety created in team (0-1)
# self_orientation: Focus on self vs team (-1 to 1)

Trust = (credibility + reliability + intimacy) / (1 + self_orientation)

# Example:
# High performing manager: (0.9 + 0.8 + 0.7) / (1 + 0.2) = 2.0
# Self-centered manager: (0.9 + 0.8 + 0.7) / (1 + 0.8) = 1.33

Good managers understand that reputation isn’t built through grand gestures but through consistent behavior in small interactions. Here’s how to play this game effectively:

  1. Information Symmetry
  • Share context behind decisions
  • Admit when you don’t know something
  • Be transparent about constraints and trade-offs
  2. Repeated Games (see the paragraph below)
  3. Signal Amplification
  • Public recognition of team contributions
  • Taking responsibility for failures
  • Defending team interests in organizational politics

Remember: in game theory, the most successful strategies in repeated games tend to be “nice, forgiving, and non-envious.” This translates to management as:

  • Nice: Start with trust and support
  • Forgiving: Allow for mistakes and learning
  • Non-envious: Celebrate team success over personal recognition
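
This is the classic lesson of iterated prisoner’s dilemma tournaments. Here is a minimal sketch of such a repeated game, showing why “nice, forgiving” strategies like tit-for-tat outscore pure defection over many rounds; the payoff values and the round count are illustrative assumptions.

# Iterated prisoner's dilemma: "C" = cooperate, "D" = defect.
PAYOFF = {  # (my_move, their_move) -> my points
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strategy_a, strategy_b, rounds=50):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append((move_a, move_b))
        history_b.append((move_b, move_a))
    return score_a, score_b

# Two "nice" players keep cooperating and both do well;
# a defector wins the first exchange but loses out over many rounds.
print(play(tit_for_tat, tit_for_tat))    # (150, 150)
print(play(tit_for_tat, always_defect))  # (49, 54)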

3. The Game Within The Game

Here’s a secret about game theory: there’s always a bigger game. That bug fix you’re working on? It’s part of a feature. That feature? Part of a product. That product? Part of a company’s vision to change the world.

Understanding this “nested games” concept is crucial for motivation. When engineers see how their daily work connects to larger outcomes, they’re more likely to make decisions that optimize for the bigger game. It’s the difference between laying bricks and building a cathedral.


But Wait… What About Reality?

Now, I hear you thinking: “This all sounds great in theory, but what about…”

The Counter-Arguments

  1. “Individual brilliance matters!” Yes, but brilliant individuals in toxic systems produce suboptimal results. Think of it like a Formula 1 car with square wheels.
  2. “We need to move fast!” Short-term thinking is exactly what game theory helps us avoid. Remember, every game theorist’s favorite game is the infinite game.
  3. “Our industry is naturally competitive!” Perfect! That means understanding game theory gives you a competitive advantage.

Bringing It All Together

The key to successful engineering management isn’t just understanding technology or people – it’s understanding the games we create and play every day. By applying game theory principles, we can build systems that:

  • Naturally align individual and team incentives
  • Promote long-term thinking
  • Turn potential competitors into collaborators

Want to Dive Deeper?

  1. “The Evolution of Trust” – Nicky Case’s interactive guide (2020)
  2. “Trust beyond reason” – drmaciver.com
  3. MIT OpenCourseWare: Game Theory
  4. “Winning at Game Theory with The Price is Right” – John Farrier (2023)
  5. “Strategy and Game Theory for Management” – IIMA

#3 Sharing Friday

https://blog.cloudflare.com/harnessing-office-chaos

This page provides an in-depth look at how Cloudflare harnesses physical chaos to bolster Internet security and explores the potential of public randomness and timelock encryption in applications.

It tells the story of Cloudflare’s LavaRand, a system that uses physical entropy sources like lava lamps for Internet security and has grown over four years, diversifying beyond its original single source.
Cloudflare handles millions of HTTP requests secured by TLS, which requires secure randomness.
LavaRand contributes true randomness to Cloudflare’s servers, enhancing the security of cryptographic protocols.


https://radar.cloudflare.com/security-and-attacks

Here you can find a very interesting public dashboard provided by Cloudflare, showing a lot of statistics about current cyber attacks.


avelino/awesome-go: A curated list of awesome Go frameworks, libraries and software (github.com)

A curated list of awesome Go frameworks, libraries and software


https://www.anthropic.com/news/claude-3-family

GPT-4 has been beaten.

Introducing three new AI models – Haiku, Sonnet, and Opus – with ascending capabilities for various applications.
Opus and Sonnet are now accessible via claude.ai and the Claude API, with Haiku coming soon.
Opus excels in benchmarks for AI systems.

All models feature improved analysis, forecasting, content creation, code generation, and multilingual conversation abilities.


kubectl trick of the week.

.bashrc

# Add to your .bashrc: list the unique image digests (with counts) of the pods
# belonging to a given release, selected via its app.kubernetes.io/instance label.
function k_get_images_digests {
  ENV="$1"   # environment, used to build the kubectl context and namespace names
  APP="$2"   # release name, used as the label selector
  kubectl --context "${ENV}-aks" \
          -n "${ENV}-security" get pod \
          -l "app.kubernetes.io/instance=${APP}" \
          -o json | jq -r '.items[].status.containerStatuses[].imageID' | sort | uniq -c
}

alias k-get-images-id=k_get_images_digests

Through this alias you can get all the image digests of a specific release, filtering by its label, and then reduce them to unique values with counts; the first argument is the environment and the second is the release name.

SAFe VS Platform Engineering

I know this is a very opinionated topic and "agile coaches" everywhere are ready to fight, so I'll keep it short and make clear that this is based purely on my experience and on discussions with other engineers and managers at different companies and levels.

We’re a team of Scaled Agile SRE,
Working together to deliver quality,
Breaking down silos and communication gaps,
We’re on a mission to make sure nothing lacks.

We follow the SAFe framework to a tee,
With its ARTs and PI planning, we’re not so free,
To deliver value in every sprint,
Continuous delivery is our mint.

Chorus:
Scaled Agile and SRE,
Together we achieve,
Quality and speed,
We’re the dream team.

We prioritize work and plan ahead,
Collaborate and ensure nothing’s left unsaid,
We monitor, measure, and analyze,
Our systems to avoid any surprise.

Chorus

We take ownership and accountability,
To deliver value with reliability.

Chorus

So when you need to deliver at scale,
You know who to call and who won’t fail,
Scaled Agile SRE,
Together we’re the ultimate recipe.

ChatGPT4 & me

To keep this post from getting too verbose, I’ll focus on only two points that I find paramount for an SRE team living in a Scaled Agile Framework (SAFe) with a Kanban-style approach: capacity planning and value flow.

Capacity

What is your definition of capacity?

Most teams don’t ask themselves this simple question and then struggle for months to plan better. Is it the sum of our hours per day? Or is it calculated from each person’s capacity after removing the average amount of support, maintenance, security fixes, and operational emergencies?

While learning to drive, in general but even more so for a motorcycle, you’re introduced to the paradoxical concept of “expect the unexpected!”

Of course, this won’t always save your life, but it can greatly reduce the probability of having an accident, because you’ll stick to best practices tested over decades of driving: don’t overtake when you can’t see the exit of a turn, don’t drive too close to the vehicle in front, and always consider the state of the road, the surroundings, and your tires before speeding up…

The good part of computer science is that you have a lot of incidents!

But they become valuable only if you start measuring them and learning from them.

So we should treat our work less like artistic craftsmanship and more from a statistical point of view, going back over the closed user stories and trying to extract average completion times split by category (support, emergencies, toil elimination, research…).

Nobody complains!

You now have a rough estimate of how much time is spent on variable work and maintenance, let’s say 20 hours per week.

You also know your fixed appointments will be at least 20 minutes per day for the daily meeting, 1 hour per week to share issues coming from development teams, and 1 hour for infrastructure refinement (open-task evaluation, innovations to adopt or to share with the team…).

Let’s say you will be neither on support (answering dev teams’ questions and providing them new resources) nor on call (supporting the operations team during emergencies).

This will give you around 40 – 20 – 1 (dailies) – 1 (weekly) – 1 (infra) – 1 (dev team weekly) – 0.5 (weekly with your manager) = 15.5 h/w of capacity, meaning 31h of capacity for the next iteration if it lasts two weeks.

Probably less, since you already have two other periodic (and useless) one-hour meetings, so let’s round down to 13 h/w ≈ 150 min/day of “uninterrupted” work.

Well… actually, to avoid going crazy and physically fighting my hardware, I need a couple of breaks: say 15 minutes in the morning and the same in the middle of the afternoon.

That means ≈ 120 min/day of “uninterrupted” work.
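
For the spreadsheet-inclined, here is a minimal sketch of the same back-of-the-envelope calculation, using the illustrative figures above (adjust them to your own team).

import math

WORK_WEEK_HOURS = 40
variable_work   = 20                    # measured average of support, maintenance, emergencies
fixed_meetings  = 1 + 1 + 1 + 1 + 0.5   # dailies, weekly, infra, dev-team weekly, 1:1
other_meetings  = 2                     # the two extra periodic one-hour meetings
breaks_per_day  = 2 * 15                # morning + afternoon break, in minutes

weekly_capacity = math.floor(WORK_WEEK_HOURS - variable_work - fixed_meetings - other_meetings)
minutes_per_day = weekly_capacity * 60 / 5

print(weekly_capacity)                   # 13 h/week of plannable work
print(minutes_per_day)                   # 156 min/day, roughly the 150 above
print(minutes_per_day - breaks_per_day)  # 126 min/day of "uninterrupted" work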

Fine, I assume I can take that high-priority user story we estimated at 10 hours, plus a smaller one, for the next iteration, leaving some contingency space.

We publish these results in the PI planning and to management, and nobody complains.

Long story short: if nobody ever complains, you’re probably not involving stakeholders correctly in your PI planning, or worse, you’re not involving them at all!

And that’s bad.

Why are you working on those features?

Why do those features exist in the first place?

If your team is decoupled from the business view, are you sure all this effort will actually help? Or do you smell rework and failure?

We should also mention that this planning didn’t leave any space for research and creative thinking. People will increasingly start solving issues quick and dirty.

Yeah, I could call Moss and Roy for a good pair-programming session, since they already solved this issue in the last iteration, but… who wants another meeting? Let’s copy-paste this workaround and move on for now…

How much value does my work have?

To measure value, we need some kind of indicator.

There are plenty of articles on the pros and cons of setting metrics for our goals even before starting. Let’s just say that you want a few custom indicators that have proven to be good estimators based on previous experience; they should take side effects into consideration and be some kind of aggregated result, meaning they shouldn’t be easily gamed (working only to improve the metric and not the quality).

Maybe we introduce general service availability and average service response time as two service level indicators (SLIs).
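
As a minimal sketch, assuming we derive these two SLIs from raw request counts and response times (all names and numbers below are illustrative):

def availability(successful_requests, total_requests):
    """Fraction of requests served successfully over the measurement window."""
    return successful_requests / total_requests

def avg_response_time_ms(latencies_ms):
    """Average service response time over the measurement window."""
    return sum(latencies_ms) / len(latencies_ms)

# Example over a fictional one-day window:
print(availability(999_412, 1_000_000))           # 0.999412 -> ~99.94% availability
print(avg_response_time_ms([120, 95, 180, 210]))  # 151.25 ms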

Then we have management working on Value Stream Analysis to understand where this value comes from, since it was requested as a new feature by the customers before the current agile train.

They succeed in reducing periodic meetings by 50% and increasing one-to-one communication. Now dev teams are able to solve issues by themselves thanks to better documentation, runbooks, etc.

Conclusions

Imagine you are trying to implement a complex application in Go; after a while you’re still failing, so you decide to switch to Quarkus (Java), which you don’t know, and mess around because you heard it’s easier. After a while, guess what? It still doesn’t work.

The same goes for Agile frameworks. People expect them to solve things auto-magically, but if we don’t put effort into changing our own behavior and into measuring ourselves in order to improve (and not to hand our manager micromanagement power), using the latest agile methodology will never solve our Friday-afternoon issues.

Sources


Implementing continuous SBOM analysis

  1. From-cves-scanners-to-sbom-generation
  2. You are here!
  3. Dependency Track – To come!

After the deep theoretical dive of the previous article, let’s try to translate all that jazz into some real examples and practical use cases for implementing continuous SBOM file generation.

(Verse 1)
Grype and Syft, two brothers, so true
In the world of tech, they’re both making their due
One’s all about security, keeping us safe
The other’s about privacy, a noble crusade

(Chorus)
Together they stand, with a mission in hand
To make the digital world a better place, you understand
Grype and Syft, two brothers, so bright
Working side by side, to make the world’s tech just right

(Verse 2)
Grype’s the strong one, he’s got all the might
He’ll protect your data, day and night
Syft’s got the brains, he’s always so smart
He’ll keep your secrets, close to your heart

(Chorus)

ChatGPT

[Azure pipelines] Grype + Syft

Below is a working example of a sample Azure pipeline comprising two templates: a vulnerability-scanner job and a parallel SBOM-generation job.

The first job leverages Grype, a well-known open-source project by Anchore, while for the second we will use its sibling, Syft.

At the beginning, we make sure this becomes continuous scanning by selecting pushes to master as the trigger action, for example so that it starts after each merge of a completed pull request.

You can specify the full name of the branch (for example, master) or a wildcard (for example, releases/*). See Wildcards for information on the wildcard syntax. For more complex triggers that use exclude or batch, check the full syntax on Microsoft documentation.

In the Grype template we will

  • download the latest binary from the public project
  • set the needed permissions to read and execute the binary
  • check if there is a grype.yaml with some extra configurations
  • run the vulnerability scanner on the given image (the Grype database is updated before each scan)
  • save the results in a file named “output_grype”
  • use output_grype to check whether there are alerts of severity High or above; if so, we also want a Warning to be raised in our Azure DevOps web interface.

In the Syft template we will have a similar list of parameters, with the addition of the SBOM file format (json, text, cyclonedx-xml, cyclonedx-json, and more).

After scanning our image for all its components, we then publish the artifact in our pipeline, since we’ll probably want to pull this list into an SBOM analysis tool (e.g. OWASP Dependency-Track; see the previous article).

Go to the code below (🆗 tested code).

Github Actions

On GitHub it would be even easier, since Syft is offered as a ready-made Anchore action.

By default, this action will execute a Syft scan in the workspace directory and upload a workflow artifact SBOM in SPDX format. It will also detect if being run during a GitHub release and upload the SBOM as a release asset.

A sample would be something like this:

name: Generate and Publish SBOM

on:
  push:
    branches:
      - main

env:
  DOCKER_IMAGE: <your-docker-image-name>
  ANCHORE_API_KEY: ${{ secrets.ANCHORE_API_KEY }}
  SBOM_ANALYSIS_TOOL_API_KEY: ${{ secrets.SBOM_ANALYSIS_TOOL_API_KEY }}

jobs:
  generate_sbom:
    runs-on: ubuntu-20.04

    steps:
    - name: Checkout code
      uses: actions/checkout@v2

    - name: Generate SBOM using Anchore SBOM Action
      uses: anchore/actions/generate-sbom@v1
      with:
        image_reference: ${{ env.DOCKER_IMAGE }}
        api_key: ${{ env.ANCHORE_API_KEY }}

    - name: Publish SBOM
      uses: actions/upload-artifact@v2
      with:
        name: sbom.json
        path: anchore_sbom.json

Code Samples

cve-sbom-azure-pipeline.yml



From CVEs scanners to SBOM generation

Example of Software Life Cycle and Bill of Materials Assembly Line

DevOps companies have always been in constant pursuit of making their software development process faster, more efficient, and more secure. In the quest for better software security, a shift is happening from using traditional vulnerability scanners to utilizing Software Bill of Materials (SBOM) generation. This article explains why DevOps companies are making the switch and how SBOM generation provides better security for their software.

A CVE is known to all, it’s a security flaw call
It’s a number assigned, to an exposure we’ve spied
It helps track and prevent, any cyber threats that might hide!

Vulnerability scanners are software tools that identify security flaws and vulnerabilities in the code, systems, and applications. They have been used for many years to secure software and have proven to be effective. However, the increasing complexity of software systems, the speed of software development, and the need for real-time security data have exposed the limitations of traditional vulnerability scanners.

Executive Order 14028

Executive Order 14028, signed by President Biden on May 12, 2021, aims to improve the cybersecurity of federal networks and critical infrastructure by strengthening software supply chain security. The order requires federal agencies to adopt measures to ensure the security of software throughout its entire lifecycle, from development to deployment and maintenance.

NIST consulted with the National Security Agency (NSA), Office of Management and Budget (OMB), Cybersecurity & Infrastructure Security Agency (CISA), and the Director of National Intelligence (DNI) and then defined “critical software” by June 26, 2021.  

Such guidance shall include standards, procedures, or criteria regarding providing a purchaser a Software Bill of Materials (SBOM) for each product directly or by publishing it on a public website.

Object Model

CycloneDX Object Model Swimlane
SBOM Object Model

SBOM generation is a newer approach to software security that provides a comprehensive view of the components and dependencies that make up a software system. SBOMs allow devops companies to see the full picture of their software and understand all the components, including open-source libraries and dependencies, that are used in their software development process. This information is critical for devops companies to have, as it allows them to stay on top of security vulnerabilities and take the necessary measures to keep their software secure.

The main advantage of SBOM generation over vulnerability scanners is that SBOMs provide a real-time view of software components and dependencies, while vulnerability scanners only provide information about known vulnerabilities.

One practical example of an SBOM generation tool is Trivy, an open-source vulnerability scanner for container images and runtime environments. It detects vulnerabilities in real time and integrates with the CI/CD pipeline, making it an effective tool for DevOps companies.

Another example is the Anchore toolchain, Syft for SBOM generation and Grype for scanning, which provides visibility into software components and dependencies, making it easier for DevOps companies to stay on top of security vulnerabilities.

OWASP Dependency-Track integrations

Finally, Dependency Track is another great tool by OWASP that allows organizations to identify and reduce risk in the software supply chain.
The Open Web Application Security Project® (OWASP) is a nonprofit foundation that works to improve the security of software through community-led open-source software projects.

The main features of Dependency Track include:

  1. Continuous component tracking: Dependency Track tracks changes to software components and dependencies in real-time, ensuring up-to-date security information.
  2. Vulnerability Management: The tool integrates with leading vulnerability databases, including the National Vulnerability Database (NVD), to provide accurate and up-to-date information on known vulnerabilities.
  3. Policy enforcement: Dependency Track enables organizations to create custom policies to enforce specific security requirements and automate the enforcement of these policies.
  4. Component Intelligence: The tool provides detailed information on components and dependencies, including licenses, age, and other relevant information.
  5. Integration with DevOps tools: Dependency Track integrates with popular DevOps tools, such as Jenkins and GitHub, to provide a seamless experience for devops teams.
  6. Reporting and Dashboards: Dependency Track provides customizable reports and dashboards to help organizations visualize their software components and dependencies, and identify potential security risks.

References

Differences from Scrum, Lean and Disciplined Agile Delivery

So, your manager just finished a SCRUM course, because your enterprise company thinks it is the cutting-edge management process and now everything should be SCRUM or something very close…

Are you doing SCRUM?

How much time do you dedicate to sprint planning?

Do you have a fixed, cross-functional and autonomous team assigned to fixed length sprints full time?

Do you have a dedicated person for managing business requirements inside a backlog?

Are you holding short (5 minutes per person) daily stand-up meetings where everyone shares just their blocking points with the rest of the team and the Scrum Master?

Are you sure you need Scrum?

Applying a complex methodology when you are deep in technical debt will just make things worse.
It is what Martin Fowler calls Flaccid Scrum.

In this case, what you really need to do first is improve your delivery fluency, starting from practices like Continuous Delivery or applying pragmatic methodologies like Extreme Programming.

For many people, this situation is exacerbated by Scrum because Scrum is a process that’s centered on project management techniques and deliberately omits any technical practices, in contrast to (for example) Extreme Programming.

Martin Fowler

Fluent Delivering teams not only focus on business value, they realize that value by shipping as often as their market will accept it. This is called “shipping on the market’s cadence.”

Delivering teams are distinguished from Focusing teams not only by their ability to ship, but their ability to ship at will.

Extreme Programming (XP) pioneered many of the techniques used by delivering teams and it remains a major influence today. Nearly all fluent teams use its major innovations, such as continuous integration, test-driven development, and “merciless” refactoring.

In recent years, the DevOps movement has extended XP’s ideas to modern cloud-based environments.

Triple constraint triangle

Comparing Scrum with Lean

So, let’s say your company’s managers have already read this article and its related sources, so you’re really going fast on your CI/CD processes and almost everything is versioned and monitored…

How to manage that in a big company with a lot of distributed teams?

Let’s take a quick look at Lean and then at Disciplined Agile Delivery.

SCHEDULE / TIME

Agile: fixed timeboxes and release plans are used to schedule your next activities. You need to sort your activities in order to plan your tasks by priority in a managed backlog.

Lean: the schedule can vary based on the priority of the tasks exposed on a Kanban board that should always be visible to everyone. There is no need for the whole team to be full time on one task; the experts can use a divide-and-conquer approach, focusing on the most critical parts first and releasing when possible, following the customer service agreements.

SCOPE

Agile: the sprint backlog will contain the minimum scope necessary to develop the next product release

Lean: the tasks are generated by customer tickets, where they also specify the urgency level.

BUDGET

Agile: ROI and Burndown charts are used to monitor budget during the project

Lean: KPI and Service Level Agreement are used to continuously check product quality and the production chain efficiency

Disciplined Agile Delivery

The Disciplined Agile Delivery (DAD) process framework is a people-first, learning-oriented hybrid agile approach to IT solution delivery. It has a risk-value lifecycle, is goal-driven, is scalable, and is enterprise-aware.

Here are the differences between Scrum, Lean, and Disciplined Agile Delivery.

PEOPLE

Keep the docs to the bare minimum.
The traditional approach of having formal handoffs of work products (primarily documents) between different disciplines such as requirements, analysis, design, test, and development is a very poor way to transfer knowledge; it creates bottlenecks and proves in practice to be a huge source of wasted time and money.

Teams should be cross-functional with no internal hierarchy. In Scrum for instance, there are only three Scrum team roles: Scrum Master, product owner, and team member. The primary roles described by DAD are stakeholder, team lead, team member, product owner, and architecture owner.

LEARNING

The first aspect is domain learning: how are you exploring and identifying what your stakeholders need, and perhaps more importantly, how are you helping the team to do so?

The second aspect is process learning, which focuses on learning to improve your process at the individual, team, and enterprise levels.

The third aspect is technical learning, which focuses on understanding how to effectively work with the tools and technologies being used to craft the solution for your stakeholders.

What may not be so obvious is the move away from promoting specialization among your staff and instead fostering a move toward people with more robust skills, something called being a generalizing specialist. Progressive organizations aggressively promote learning opportunities for their people outside their specific areas of specialty, as well as opportunities to actually apply these new skills.

HYBRID PROCESS

DAD will take elements from the other methodologies to tailor a process that best suits an enterprise agile team:

  • prioritized backlog from Scrum
  • Kanban dashboard and limit work in progress approach from Kanban (Toyota production system)
  • an Agile way to manage data and documents
  • CI/CD, TDD, collective ownership practices from Extreme Programming and DevOps

IT SOLUTIONS OVER SOFTWARE

As IT professionals we do far more than just develop software. Yes, software is clearly important, but in addressing the needs of our stakeholders we often provide new or upgraded hardware, change the business/operational processes that stakeholders follow, and even help change the organizational structure in which our stakeholders work.

Agile was created mostly by developers and consultants; we need to focus more on business needs and company process optimization.

Goal-Driven Delivery Lifecycle

  • It is a delivery process extending the Scrum one, starting from the initial vision to the release in production;
  • explicit phases: Inception, Construction and Transition;
    • Inception: initiate team, schedule stakeholders meetings, requirements collection, architecture design, align with company policies, release planning, set up environment
    • Construction: CI, CD, burndown charts, TDD, refactoring, retrospective, etc.
    • Transition: delivering in production. This stage contains steps like UAT, data migration, support environment preparation, stakeholders alignment, finalize documentation and solution deployment.
  • put the phases in the right context: evaluate system preparation activities before development start and management of the system by other groups after the final release
  • explicit milestones

Conclusions

Here we have briefly seen the main differences between Scrum, Lean, and Disciplined Agile Delivery.

DAD is a very complex process; to find out the details, there is just THE book to read in the final references.

A complete enterprise delivery process is something that requires months of work by an architecture board, but the point here is how to take the right direction as soon as possible, avoiding being hypnotized by buzzwords like Scrum or thinking that we are really agile just because we hold a one-hour stand-up meeting every morning.

Start by removing your technical debt, firmly following XP and DevOps practices. Then start formalizing your process methodology and make sure everyone is walking the same path.

REFERENCES:
