From cc213dcbd9d426a85b987c10f04875c7c9e560c8 Mon Sep 17 00:00:00 2001
From: Amitabh-DevOps
Date: Tue, 11 Feb 2025 17:36:53 +0300
Subject: [PATCH 01/33] Added task for week4

---
 2025/git/README.md | 211 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 211 insertions(+)

diff --git a/2025/git/README.md b/2025/git/README.md
index 8b13789179..03975b518b 100644
--- a/2025/git/README.md
+++ b/2025/git/README.md
@@ -1 +1,212 @@
# Week 4: Git and GitHub Challenge

Welcome to the Week 4 Challenge! In this task you will practice the essential Git and GitHub commands and concepts taught by Shubham Bhaiya. This includes:

- **Git Basics:** `git init`, `git add`, `git commit`
- **Repository Management:** `git clone`, forking a repository, and understanding how a GitHub repo is made
- **Branching:** Creating branches (`git branch`), switching between branches (`git switch` / `git checkout`), and viewing commit history (`git log`)
- **Authentication:** Pushing and pulling using a Personal Access Token (PAT)
- **Critical Thinking:** Explaining why branching strategies are important in collaborative development

To make this challenge more difficult, additional steps have been added. You will also be required to explore SSH authentication as a bonus task. Complete all the tasks and document every step in `solution.md`. Finally, share your experience on LinkedIn (details provided at the end).

---

## Challenge Tasks

### Task 1: Fork and Clone the Repository
1. **Fork the Repository:**
   - Visit [this repository](https://github.com/LondheShubham153/90DaysOfDevOps) and fork it to your own GitHub account, if you haven't already.

2. **Clone Your Fork Locally:**
   - Clone the forked repository using HTTPS:
     ```bash
     git clone https://github.com/<your-username>/90DaysOfDevOps.git
     ```
   - Change directory into the cloned repository:
     ```bash
     cd 2025/git
     ```

---

### Task 2: Initialize a Local Repository and Create a File
1. 
**Set Up Your Challenge Directory:**
   - Inside the cloned repository, create a new directory for this challenge:
     ```bash
     mkdir week-4-challenge
     cd week-4-challenge
     ```

2. **Initialize a Git Repository:**
   - Initialize the directory as a new Git repository:
     ```bash
     git init
     ```

3. **Create a File:**
   - Create a file named `info.txt` and add some initial content (for example, your name and a brief introduction).

4. **Stage and Commit Your File:**
   - Stage the file:
     ```bash
     git add info.txt
     ```
   - Commit the file with a descriptive message:
     ```bash
     git commit -m "Initial commit: Add info.txt with introductory content"
     ```

---

### Task 3: Configure Remote URL with PAT and Push/Pull

1. **Configure Remote URL with Your PAT:**
   To avoid entering your Personal Access Token (PAT) every time you push or pull, update your remote URL to include your credentials.

   **⚠️ Note:** Embedding your PAT in the URL is only for this exercise. It is not recommended for production use.

   Replace `<your-username>` and `<your-PAT>` with your actual GitHub username and PAT; the repository name here is `90DaysOfDevOps`:

   ```bash
   git remote add origin https://<your-username>:<your-PAT>@github.com/<your-username>/90DaysOfDevOps.git
   ```
   If a remote named `origin` already exists, update it with:
   ```bash
   git remote set-url origin https://<your-username>:<your-PAT>@github.com/<your-username>/90DaysOfDevOps.git
   ```
2. **Push Your Commit to Remote:**
   - Push your current branch (typically `main`) and set the upstream:
     ```bash
     git push -u origin main
     ```
3. **(Optional) Pull Remote Changes:**
   - Verify your configuration by pulling changes:
     ```bash
     git pull origin main
     ```

---

### Task 4: Explore Your Commit History
1. **View the Git Log:**
   - Check your commit history using:
     ```bash
     git log
     ```
   - Take note of the commit hash and details, as you will reference these in your documentation.

---

### Task 5: Advanced Branching and Switching
1. 
**Create a New Branch:**
   - Create a branch called `feature-update`:
     ```bash
     git branch feature-update
     ```

2. **Switch to the New Branch:**
   - Switch using `git switch`:
     ```bash
     git switch feature-update
     ```
   - Alternatively, you can use:
     ```bash
     git checkout feature-update
     ```

3. **Modify the File and Commit Changes:**
   - Edit `info.txt` (for example, add more details or improvements).
   - Stage and commit your changes:
     ```bash
     git add info.txt
     git commit -m "Feature update: Enhance info.txt with additional details"
     git push origin feature-update
     ```
   - Merge this branch into `main` via a Pull Request on GitHub.

4. **(Advanced) Optional Extra Challenge:**
   - If you feel confident, create another branch (e.g., `experimental`) from your main branch, make a conflicting change to `info.txt`, then switch back to `feature-update` and merge `experimental` to simulate a merge conflict. Resolve the conflict manually, then commit the resolution.
   > *Note: This extra step is optional and intended for those looking for an additional challenge.*

---

### Task 6: Explain Branching Strategies
1. **Document Your Process:**
   - Create (or update) a file named `solution.md` in your repository.
   - List all the Git commands you used in Tasks 1–4.
   - **Explain:** Write a brief explanation of **why branching strategies are important** in collaborative development. Consider addressing:
     - Isolating features and bug fixes
     - Facilitating parallel development
     - Reducing merge conflicts
     - Enabling effective code reviews

---

### Bonus Task: Explore SSH Authentication
1. **Generate an SSH Key (if not already set up):**
   - Create an SSH key pair:
     ```bash
     ssh-keygen
     ```
   - Follow the prompts and then locate your public key (typically found at `~/.ssh/id_ed25519.pub`).

2. **Add Your SSH Public Key to GitHub:**
   - Copy the contents of your public key and add it to your GitHub account under **SSH and GPG keys**.
     (See [Connecting to GitHub with SSH](https://docs.github.com/en/authentication/connecting-to-github-with-ssh) for help.)

3. **Switch Your Remote URL to SSH:**
   - Change the remote URL from HTTPS to SSH:
     ```bash
     git remote set-url origin git@github.com:<your-username>/90DaysOfDevOps.git
     ```

4. **Push Your Branch Using SSH:**
   - Test the SSH connection by pushing your branch:
     ```bash
     git push origin feature-update
     ```

---

## 📢 How to Submit

1. **Push Your Final Work:**
   - Ensure your branch (e.g., `feature-update`) with the updated `solution.md` file is pushed to your fork.

2. **Create a Pull Request (PR):**
   - Open a PR from your branch to the main repository.
   - Use a clear title such as:
     ```
     Week 4 Challenge - DevOps Batch 9: Git & GitHub Advanced Challenge
     ```
   - In the PR description, summarize your process and list the Git commands you used.

3. **Share Your Experience on LinkedIn:**
   - Write a LinkedIn post summarizing your Week 4 experience.
   - Include screenshots or logs of your tasks.
   - Use the hashtags: **#90DaysOfDevOps #GitGithub #DevOps**
   - Optionally, share any blog posts, GitHub repos, or articles you create about this challenge.
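
**Tip:** the local parts of Tasks 2–5 can be rehearsed end to end in a scratch directory before you touch your fork. A minimal sketch follows; the directory is throwaway, the identity values are placeholders, and the push/PAT steps are omitted since they need your own remote:

```bash
set -e
# Throwaway practice repo -- nothing here touches your real fork.
practice=$(mktemp -d)
cd "$practice"
mkdir week-4-challenge && cd week-4-challenge

git init -q
git config user.name  "Your Name"         # placeholder identity
git config user.email "you@example.com"   # placeholder identity

# Task 2: create, stage, and commit info.txt
echo "Name: Your Name" > info.txt
git add info.txt
git commit -qm "Initial commit: Add info.txt with introductory content"

# Task 5: branch, switch, and commit on the feature branch
git branch feature-update
git switch feature-update
echo "More details" >> info.txt
git add info.txt
git commit -qm "Feature update: Enhance info.txt with additional details"

git log --oneline          # both commits, newest first
git branch --show-current  # feature-update
```

Once this flows comfortably, repeat the same commands inside your real clone and continue with the push and PR steps.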
+ +--- + +## Additional Resources + +- **Git Documentation:** + [https://git-scm.com/docs](https://git-scm.com/docs) + +- **Creating a Personal Access Token:** + [GitHub PAT Setup](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token) + +- **Forking and Cloning Repositories:** + [Fork a Repository](https://docs.github.com/en/get-started/quickstart/fork-a-repo) | [Cloning a Repository](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository) + +- **SSH Authentication with GitHub:** + [Connecting to GitHub with SSH](https://docs.github.com/en/authentication/connecting-to-github-with-ssh) + +- **Understanding Branching Strategies:** + [Git Branching Strategies](https://www.atlassian.com/git/tutorials/comparing-workflows) + +--- + +Happy coding and best of luck with this challenge! Document your journey thoroughly and be sure to explore the additional resources if you get stuck. From 1fcbc18f5a65d8848e91b75ac29448cf24e7a3e5 Mon Sep 17 00:00:00 2001 From: Amitabh-DevOps Date: Tue, 11 Feb 2025 17:52:24 +0300 Subject: [PATCH 02/33] Added Week4 tasks --- 2025/git/{ => 01_Git_and_Github_Basics}/README.md | 0 2025/git/02_Git_and_Github_Advanced/README.md | 0 2 files changed, 0 insertions(+), 0 deletions(-) rename 2025/git/{ => 01_Git_and_Github_Basics}/README.md (100%) create mode 100644 2025/git/02_Git_and_Github_Advanced/README.md diff --git a/2025/git/README.md b/2025/git/01_Git_and_Github_Basics/README.md similarity index 100% rename from 2025/git/README.md rename to 2025/git/01_Git_and_Github_Basics/README.md diff --git a/2025/git/02_Git_and_Github_Advanced/README.md b/2025/git/02_Git_and_Github_Advanced/README.md new file mode 100644 index 0000000000..e69de29bb2 From e984c0d9120013ec33d2bec35431f4944aec9292 Mon Sep 17 00:00:00 2001 From: Amitabh-DevOps Date: Tue, 11 Feb 2025 17:53:52 +0300 Subject: [PATCH 03/33] Updated Week4 tasks --- 
 2025/git/01_Git_and_Github_Basics/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/2025/git/01_Git_and_Github_Basics/README.md b/2025/git/01_Git_and_Github_Basics/README.md
index 03975b518b..589e08c57c 100644
--- a/2025/git/01_Git_and_Github_Basics/README.md
+++ b/2025/git/01_Git_and_Github_Basics/README.md
@@ -25,7 +25,7 @@ To make this challenge more difficult, additional steps have been added. You wil
     ```
   - Change directory into the cloned repository:
     ```bash
-     cd 2025/git
+     cd 2025/git/01_Git_and_Github_Basics
     ```
 
 ---

From e09bfc593f8c07c3cee99b499feabc69e8be8866 Mon Sep 17 00:00:00 2001
From: Amitabh-DevOps
Date: Mon, 17 Feb 2025 06:05:00 +0300
Subject: [PATCH 04/33] Added task for week5-Docker

---
 2025/docker/README.md | 234 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 234 insertions(+)

diff --git a/2025/docker/README.md b/2025/docker/README.md
index 8b13789179..c66ad04d28 100644
--- a/2025/docker/README.md
+++ b/2025/docker/README.md
@@ -1 +1,235 @@
# Week 5: Docker Challenge - DevOps Batch 9

Welcome to the Week 5 Docker Challenge! In this task, you will work with Docker concepts and tools taught by Shubham Bhaiya. This challenge covers the following topics:

- **Introduction and Purpose:** Understand Docker’s role in modern development.
- **Virtualization vs. Containerization:** Learn the differences and benefits.
- **What a Build Is (Build Kya Hota Hai):** Understand the Docker build process.
- **Docker Terminologies:** Get familiar with key Docker terms.
- **Docker Components:** Explore Docker Engine, images, containers, and more.
- **Project Building Using Docker:** Containerize a sample project.
- **Multi-stage Docker Builds / Distroless Images:** Optimize your images.
- **Docker Hub (Push/Tag/Pull):** Manage and distribute your Docker images.
- **Docker Volumes:** Persist data across container runs.
- **Docker Networking:** Connect containers using networks.
- **Docker Compose:** Orchestrate multi-container applications.
- **Docker Scout:** Analyze your images for vulnerabilities and insights.

Complete all the tasks below and document your steps, commands, and observations in a file named `solution.md`. Finally, share your experience on LinkedIn using the provided guidelines.

---

## Challenge Tasks

### Task 1: Introduction and Conceptual Understanding
1. **Write an Introduction:**
   - In your `solution.md`, provide a brief explanation of Docker’s purpose in modern DevOps.
   - Compare **Virtualization vs. Containerization** and explain why containerization is the preferred approach for microservices and CI/CD pipelines.

---

### Task 2: Create a Dockerfile for a Sample Project
1. **Select or Create a Sample Application:**
   - Choose a simple application (for example, a basic Node.js, Python, or Java app that prints “Hello, Docker!” or serves a simple web page).

2. **Write a Dockerfile:**
   - Create a `Dockerfile` that defines how to build an image for your application.
   - Include comments in your Dockerfile explaining each instruction.
   - Build your image using:
     ```bash
     docker build -t <your-username>/sample-app:latest .
     ```

3. **Verify Your Build:**
   - Run your container locally to ensure it works as expected:
     ```bash
     docker run -d -p 8080:80 <your-username>/sample-app:latest
     ```
   - Verify the container is running with:
     ```bash
     docker ps
     ```
   - Check logs using:
     ```bash
     docker logs <container-id>
     ```

---

### Task 3: Explore Docker Terminologies and Components
1. **Document Key Terminologies:**
   - In your `solution.md`, list and briefly describe key Docker terms such as image, container, Dockerfile, volume, and network.
   - Explain the main Docker components (Docker Engine, Docker Hub, etc.) and how they interact.

---

### Task 4: Optimize Your Docker Image with Multi-Stage Builds
1. **Implement a Multi-Stage Docker Build:**
   - Modify your existing `Dockerfile` to include multi-stage builds.
   - Aim to produce a lightweight, **distroless** (or minimal) final image.
2. **Compare Image Sizes:**
   - Build your image before and after the multi-stage build modification and compare their sizes using:
     ```bash
     docker images
     ```
3. **Document the Differences:**
   - Explain in `solution.md` the benefits of multi-stage builds and the impact on image size.

---

### Task 5: Manage Your Image with Docker Hub
1. **Tag Your Image:**
   - Tag your image appropriately:
     ```bash
     docker tag <your-username>/sample-app:latest <your-username>/sample-app:v1.0
     ```
2. **Push Your Image to Docker Hub:**
   - Log in to Docker Hub if necessary:
     ```bash
     docker login
     ```
   - Push the image:
     ```bash
     docker push <your-username>/sample-app:v1.0
     ```
3. **(Optional) Pull the Image:**
   - Verify by pulling your image:
     ```bash
     docker pull <your-username>/sample-app:v1.0
     ```

---

### Task 6: Persist Data with Docker Volumes
1. **Create a Docker Volume:**
   - Create a Docker volume:
     ```bash
     docker volume create my_volume
     ```
2. **Run a Container with the Volume:**
   - Run a container using the volume to persist data:
     ```bash
     docker run -d -v my_volume:/app/data <your-username>/sample-app:v1.0
     ```
3. **Document the Process:**
   - In `solution.md`, explain how Docker volumes help with data persistence and why they are useful.

---

### Task 7: Configure Docker Networking
1. **Create a Custom Docker Network:**
   - Create a custom Docker network:
     ```bash
     docker network create my_network
     ```
2. **Run Containers on the Same Network:**
   - Run two containers (e.g., your sample app and a simple database like MySQL) on the same network to demonstrate inter-container communication:
     ```bash
     docker run -d --name sample-app --network my_network <your-username>/sample-app:v1.0
     docker run -d --name my-db --network my_network -e MYSQL_ROOT_PASSWORD=root mysql:latest
     ```
3. 
**Document the Process:**
   - In `solution.md`, describe how Docker networking enables container communication and its significance in multi-container applications.

---

### Task 8: Orchestrate with Docker Compose
1. **Create a docker-compose.yml File:**
   - Write a `docker-compose.yml` file that defines at least two services (e.g., your sample app and a database).
   - Include definitions for services, networks, and volumes.
2. **Deploy Your Application:**
   - Bring up your application using:
     ```bash
     docker-compose up -d
     ```
   - Test the setup, then shut it down using:
     ```bash
     docker-compose down
     ```
3. **Document the Process:**
   - Explain each service and configuration in your `solution.md`.

---

### Task 9: Analyze Your Image with Docker Scout
1. **Run Docker Scout Analysis:**
   - Execute Docker Scout on your image to generate a detailed report of vulnerabilities and insights:
     ```bash
     docker scout cves <your-username>/sample-app:v1.0
     ```
   - Alternatively, if available, run:
     ```bash
     docker scout quickview <your-username>/sample-app:v1.0
     ```
     to get a summarized view of the image’s security posture.
   - **Optional:** Save the output to a file for further analysis:
     ```bash
     docker scout cves <your-username>/sample-app:v1.0 > scout_report.txt
     ```

2. **Review and Interpret the Report:**
   - Carefully review the output and focus on:
     - **List of CVEs:** Identify vulnerabilities along with their severity ratings (e.g., Critical, High, Medium, Low).
     - **Affected Layers/Dependencies:** Determine which image layers or dependencies are responsible for the vulnerabilities.
     - **Suggested Remediations:** Note any recommended fixes or mitigation strategies provided by Docker Scout.
     - **Comparison Step:** If possible, compare this report with previous builds to assess improvements or regressions in your image's security posture.
   - If Docker Scout is not available in your environment, document that fact and consider using an alternative vulnerability scanner (e.g., Trivy, Clair) for a comparative analysis.

3. **Document Your Findings:**
   - In your `solution.md`, provide a detailed summary of your analysis:
     - List the identified vulnerabilities along with their severity levels.
     - Specify which layers or dependencies contributed to these vulnerabilities.
     - Outline any actionable recommendations or remediation steps.
     - Reflect on how these insights might influence your image optimization or overall security strategy.
   - **Optional:** Include screenshots or attach the saved report file (`scout_report.txt`) as evidence of your analysis.

---

### Task 10: Documentation and Critical Reflection
1. **Update `solution.md`:**
   - List all the commands and steps you executed.
   - Provide explanations for each task and detail any improvements made (e.g., image optimization with multi-stage builds).
2. **Reflect on Docker’s Impact:**
   - Write a brief reflection on the importance of Docker in modern software development, discussing its benefits and potential challenges.

---

## 📢 How to Submit

1. **Push Your Final Work:**
   - Ensure that your complete project—including your `Dockerfile`, `docker-compose.yml`, `solution.md`, and any additional files (e.g., the Docker Scout report if saved)—is committed and pushed to your repository.
   - Verify that all your changes are visible in your repository.

2. **Create a Pull Request (PR):**
   - Open a PR from your working branch (e.g., `docker-challenge`) to the main repository.
   - Use a clear and descriptive title, for example:
     ```
     Week 5 Challenge - DevOps Batch 9: Docker Basics & Advanced Challenge
     ```
   - In the PR description, include the following details:
     - A brief summary of your approach and the tasks you completed.
     - A list of the key Docker commands used during the challenge.
     - Any insights or challenges you encountered (e.g., lessons learned from multi-stage builds or Docker Scout analysis).

3. **Share Your Experience on LinkedIn:**
   - Write a LinkedIn post summarizing your Week 5 Docker challenge experience.
   - In your post, include:
     - A brief description of the challenge and what you learned.
     - Screenshots, logs, or excerpts from your `solution.md` that highlight key steps or interesting findings (e.g., Docker Scout reports).
     - The hashtags: **#90DaysOfDevOps #Docker #DevOps**
     - Optionally, links to any blog posts or related GitHub repositories that further explain your journey.

---

## Additional Resources

- **[Docker Documentation](https://docs.docker.com/)**
- **[Docker Hub](https://hub.docker.com/)**
- **[Multi-stage Builds](https://docs.docker.com/develop/develop-images/multistage-build/)**
- **[Docker Compose](https://docs.docker.com/compose/)**
- **[Docker Scout](https://www.docker.com/blog/docker-scout-beta-docker-experimental/)**
- **[Containerization vs. Virtualization](https://www.docker.com/resources/what-container)**

---

Happy coding and best of luck with this Docker challenge! Document your journey thoroughly in `solution.md` and refer to these resources for additional guidance.

From ca7633c209a78eb4177f1a8b659fbb61a779da39 Mon Sep 17 00:00:00 2001
From: Amitabh-DevOps
Date: Mon, 17 Feb 2025 06:06:54 +0300
Subject: [PATCH 05/33] Updated task for week5-Docker

---
 2025/docker/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/2025/docker/README.md b/2025/docker/README.md
index c66ad04d28..96a2aec7fa 100644
--- a/2025/docker/README.md
+++ b/2025/docker/README.md
@@ -1,4 +1,4 @@
-# Week 5: Docker Challenge - DevOps Batch 9
+# Week 5: Docker Basics & Advanced Challenge
 Welcome to the Week 5 Docker Challenge! In this task, you will work with Docker concepts and tools taught by Shubham Bhaiya.
This challenge covers the following topics:

From 6202ead2e756408b665eae80c21312fb8370a784 Mon Sep 17 00:00:00 2001
From: Amitabh-DevOps
Date: Mon, 17 Feb 2025 06:11:20 +0300
Subject: [PATCH 06/33] Updated task for week5-Docker

---
 2025/docker/README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/2025/docker/README.md b/2025/docker/README.md
index 96a2aec7fa..878345f660 100644
--- a/2025/docker/README.md
+++ b/2025/docker/README.md
@@ -227,8 +227,8 @@ Complete all the tasks below and document your steps, commands, and observations
 - **[Docker Hub](https://hub.docker.com/)**
 - **[Multi-stage Builds](https://docs.docker.com/develop/develop-images/multistage-build/)**
 - **[Docker Compose](https://docs.docker.com/compose/)**
-- **[Docker Scout](https://www.docker.com/blog/docker-scout-beta-docker-experimental/)**
-- **[Containerization vs. Virtualization](https://www.docker.com/resources/what-container)**
+- **[Docker Scan (Vulnerability Scanning)](https://docs.docker.com/engine/scan/)**
+- **[Containerization vs.
Virtualization](https://www.docker.com/resources/what-container)**
 
 ---

From b932cec10732813a751de4663326cc8f23c33310 Mon Sep 17 00:00:00 2001
From: Amitabh-DevOps
Date: Mon, 17 Feb 2025 06:14:18 +0300
Subject: [PATCH 07/33] Updated task for week5-Docker

---
 2025/docker/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/2025/docker/README.md b/2025/docker/README.md
index 878345f660..194a3ac090 100644
--- a/2025/docker/README.md
+++ b/2025/docker/README.md
@@ -224,7 +224,7 @@ Complete all the tasks below and document your steps, commands, and observations
 ## Additional Resources
 
 - **[Docker Documentation](https://docs.docker.com/)**
-- **[Docker Hub](https://hub.docker.com/)**
+- **[Docker Hub](https://docs.docker.com/docker-hub/)**
 - **[Multi-stage Builds](https://docs.docker.com/develop/develop-images/multistage-build/)**
 - **[Docker Compose](https://docs.docker.com/compose/)**
 - **[Docker Scan (Vulnerability Scanning)](https://docs.docker.com/engine/scan/)**

From 784669b3e38a7bdd6065cea0fff1a9de06327f58 Mon Sep 17 00:00:00 2001
From: Amitabh-DevOps
Date: Mon, 17 Feb 2025 06:52:04 +0300
Subject: [PATCH 08/33] Added task for week4: Git & GitHub Advanced

---
 2025/git/02_Git_and_Github_Advanced/README.md | 208 ++++++++++++++++++
 1 file changed, 208 insertions(+)

diff --git a/2025/git/02_Git_and_Github_Advanced/README.md b/2025/git/02_Git_and_Github_Advanced/README.md
index e69de29bb2..5b9e775252 100644
--- a/2025/git/02_Git_and_Github_Advanced/README.md
+++ b/2025/git/02_Git_and_Github_Advanced/README.md
@@ -0,0 +1,208 @@
# Week 4: Git & GitHub Advanced Challenge

This challenge covers advanced Git concepts essential for real-world DevOps workflows. By the end of this challenge, you will:

- Understand how to work with Pull Requests effectively.
- Learn to undo changes using Reset & Revert.
- Use Stashing to manage uncommitted work.
- Apply Cherry-picking for selective commits.
- Keep a clean commit history using Rebasing.
- Learn industry-standard Branching Strategies.

## **Topics Covered**
1. Pull Requests – Collaborating in teams.
2. Reset & Revert – Undo changes safely.
3. Stashing – Saving work temporarily.
4. Cherry-picking – Selecting specific commits.
5. Rebasing – Maintaining a clean history.
6. Branching Strategies – Industry best practices.

## **Challenge Tasks**

### **Task 1: Working with Pull Requests (PRs)**
**Scenario:** You are working on a new feature and need to merge your changes into the main branch using a Pull Request.

1. Fork a repository and clone it locally.
   ```bash
   git clone <your-fork-url>
   cd <repository-name>
   ```
2. Create a feature branch and make changes.
   ```bash
   git checkout -b feature-branch
   echo "New Feature" >> feature.txt
   git add .
   git commit -m "Added a new feature"
   ```
3. Push the changes and create a Pull Request.
   ```bash
   git push origin feature-branch
   ```
4. Open a PR on GitHub, request a review, and merge it once approved.

**Document in `solution.md`**
- Steps to create a PR.
- Best practices for writing PR descriptions.
- Handling review comments.

---

### **Task 2: Undoing Changes – Reset & Revert**
**Scenario:** You accidentally committed incorrect changes and need to undo them.

1. Create and modify a file.
   ```bash
   echo "Wrong code" >> wrong.txt
   git add .
   git commit -m "Committed by mistake"
   ```
2. Soft Reset (keeps changes staged).
   ```bash
   git reset --soft HEAD~1
   ```
3. Mixed Reset (unstages changes but keeps files).
   ```bash
   git reset --mixed HEAD~1
   ```
4. Hard Reset (removes all changes).
   ```bash
   git reset --hard HEAD~1
   ```
5. Revert a commit safely.
   ```bash
   git revert HEAD
   ```

**Document in `solution.md`**
- Differences between `reset` and `revert`.
- When to use each method.

---

### **Task 3: Stashing - Save Work Without Committing**
**Scenario:** You need to switch branches but don’t want to commit incomplete work.

1. Modify a file without committing.
   ```bash
   echo "Temporary Change" >> temp.txt
   git add temp.txt
   ```
2. Stash the changes.
   ```bash
   git stash
   ```
3. Switch to another branch and apply the stash.
   ```bash
   git checkout main
   git stash pop
   ```

**Document in `solution.md`**
- When to use `git stash`.
- Difference between `git stash pop` and `git stash apply`.

---

### **Task 4: Cherry-Picking - Selectively Apply Commits**
**Scenario:** A bug fix exists in another branch, and you only want to apply that specific commit.

1. Find the commit to cherry-pick.
   ```bash
   git log --oneline
   ```
2. Apply a specific commit to the current branch.
   ```bash
   git cherry-pick <commit-hash>
   ```
3. Resolve conflicts, if any.
   ```bash
   git cherry-pick --continue
   ```

**Document in `solution.md`**
- How cherry-picking is used in bug fixes.
- Risks of cherry-picking.

---

### **Task 5: Rebasing - Keeping a Clean Commit History**
**Scenario:** Your branch is behind the main branch and needs to be updated without extra merge commits.

1. Fetch the latest changes.
   ```bash
   git fetch origin main
   ```
2. Rebase the feature branch onto main.
   ```bash
   git rebase origin/main
   ```
3. Resolve conflicts and continue.
   ```bash
   git rebase --continue
   ```

**Document in `solution.md`**
- Difference between `merge` and `rebase`.
- Best practices for rebasing.

---

### **Task 6: Branching Strategies Used in Companies**
**Scenario:** Understand real-world branching strategies used in DevOps workflows.

1. Research and explain Git workflows:
   - Git Flow (Feature, Release, Hotfix branches).
   - GitHub Flow (Main + Feature branches).
   - Trunk-Based Development (Continuous Integration).

2. Simulate a Git workflow using branches.
   ```bash
   git branch feature-1
   git branch hotfix-1
   git checkout feature-1
   ```

**Document in `solution.md`**
- Which strategy is best for DevOps and CI/CD.
- Pros and cons of different workflows.

---

## **How to Submit**

1. 
**Push your work to GitHub.**
   ```bash
   git add .
   git commit -m "Completed Git & GitHub Advanced Challenge"
   git push origin main
   ```

2. **Create a Pull Request.**
   - Title:
     ```
     Git & GitHub Advanced Challenge - Completed
     ```
   - PR Description:
     - Steps followed for each task.
     - Screenshots or logs (if applicable).

3. **Share Your Experience on LinkedIn:**
   - Write a LinkedIn post summarizing your Week 4 Git & GitHub challenge experience.
   - In your post, include:
     - A brief description of the challenge and what you learned.
     - Screenshots or excerpts from your `solution.md` that highlight key steps or interesting findings.
     - The hashtags: **#90DaysOfDevOps #Git #GitHub #VersionControl #DevOps**
     - Optionally, links to any blog posts or related GitHub repositories that further explain your journey.

---

## **Additional Resources**
- [Git Official Documentation](https://git-scm.com/doc)
- [Git Reset & Revert Guide](https://www.atlassian.com/git/tutorials/resetting-checking-out-and-reverting)
- [Git Stash Explained](https://git-scm.com/book/en/v2/Git-Tools-Stashing-and-Cleaning)
- [Cherry-Picking Best Practices](https://www.atlassian.com/git/tutorials/cherry-pick)
- [Branching Strategies for DevOps](https://www.atlassian.com/git/tutorials/comparing-workflows)

---

Happy coding and best of luck with this challenge! Document your journey thoroughly and be sure to explore the additional resources if you get stuck.
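
As a companion to Task 2, the reset and revert behaviors can be rehearsed safely in a throwaway repository before trying them on real work. A minimal sketch; the file names, messages, and identity values are illustrative only:

```bash
set -e
# Scratch repo so no real history is at risk.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.name "Demo"
git config user.email "demo@example.com"

echo "good" > file.txt
git add file.txt
git commit -qm "Good commit"

echo "Wrong code" >> file.txt
git add file.txt
git commit -qm "Committed by mistake"

# reset --soft rewinds HEAD one commit but leaves the change staged
git reset --soft HEAD~1
git status --short           # the mistake is still staged as a modification

# Re-commit, then undo it with revert: history records the undo as a new commit
git commit -qm "Committed by mistake (again)"
git revert --no-edit HEAD

cat file.txt                 # back to: good
git rev-list --count HEAD    # 3 commits: good, mistake, revert
```

The contrast this demonstrates is the one Task 2 asks you to document: `reset` rewrites local history (fine before pushing), while `revert` adds a new commit that undoes an old one (safe on shared branches).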
From 187682e55b2c48f6ea16e236c35de0ea474b5e48 Mon Sep 17 00:00:00 2001
From: Amitabh-DevOps
Date: Mon, 24 Feb 2025 16:21:06 +0300
Subject: [PATCH 09/33] Added task for week-6 Jenkins

---
 2025/cicd/README.md | 181 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 181 insertions(+)

diff --git a/2025/cicd/README.md b/2025/cicd/README.md
index 8b13789179..3000c157b1 100644
--- a/2025/cicd/README.md
+++ b/2025/cicd/README.md
@@ -1 +1,182 @@
# Week 6: Jenkins Basics & Advanced Challenge

In this challenge, you will deepen your understanding of Jenkins and its advanced features—essential for building robust CI/CD pipelines in a cloud environment. You will explore the Jenkins UI, create and run pipelines, configure agents and RBAC, leverage Shared Libraries, and integrate vulnerability scanning using Trivy.

## Topics Covered
- **Jenkins UI Flow** – Navigating and understanding the Jenkins dashboard.
- **Jenkins Pipelines** – Building and automating CI/CD workflows.
- **Automate CI/CD** – Using pipelines to streamline deployments.
- **Agents / Nodes** – Configuring distributed builds.
- **RBAC (Role-Based Access Control)** – Securing your Jenkins environment.
- **Shared Libraries** – Reusing pipeline code across projects.
- **Trivy** – Scanning Docker images for vulnerabilities.

Complete each task below and document your steps, commands, and observations in a file named `solution.md`. Finally, share your experience on LinkedIn using the provided guidelines.

---

## Challenge Tasks

### Task 1: Explore the Jenkins UI Flow
1. **Log In and Navigate:**
   - Access your Jenkins instance hosted on AWS.
   - Explore the dashboard, job configurations, build history, and system logs.
2. **Document Your Observations:**
   - In `solution.md`, describe the main sections of the Jenkins UI.
   - Explain how you navigate between different views (jobs, builds, plugins, etc.).

---

### Task 2: Create a Jenkins Pipeline Job for CI/CD
1. 
**Set Up a Pipeline Job:**
   - Create a new Pipeline job in Jenkins.
   - Write a basic Jenkinsfile that automates the build, test, and deployment of a sample application (e.g., a simple web app).
   - Suggested stages: **Build**, **Test**, **Deploy**.
2. **Run and Verify the Pipeline:**
   - Trigger the pipeline and ensure each stage runs successfully.
   - Verify the execution by checking console logs and, if applicable, using `docker ps` to confirm container status.
3. **Document Your Pipeline:**
   - In `solution.md`, include your Jenkinsfile code and explain the purpose of each stage.

---

### Task 3: Configure Jenkins Agents / Nodes
1. **Set Up an Agent:**
   - Connect an external agent (using a VM, Docker container, or cloud instance) to your Jenkins master.
   - Configure the agent via "Manage Jenkins" → "Manage Nodes and Clouds".
2. **Assign Jobs to the Agent:**
   - Modify your pipeline job (from Task 2) to run on the newly configured agent.
3. **Document the Process:**
   - In `solution.md`, detail the steps taken to configure the agent.
   - Explain how assigning jobs to agents improves scalability and parallel execution.

---

### Task 4: Implement RBAC in Jenkins
1. **Configure Role-Based Access Control:**
   - Set up different user roles (e.g., Admin, Developer, Viewer) using "Matrix-based security" or the Role Strategy Plugin.
2. **Test the Access Controls:**
   - Create test user accounts and verify that each role has the appropriate permissions.
3. **Document Your Configuration:**
   - In `solution.md`, explain your RBAC setup and its importance in securing your Jenkins environment.

---

### Task 5: Utilize Jenkins Shared Libraries
1. **Create a Shared Library:**
   - Develop a simple Shared Library containing reusable pipeline code (e.g., a common stage for running tests or sending notifications).
   - Host the library in a separate Git repository.
2. 
**Integrate the Library:** + - Modify your Jenkinsfile from Task 2 to call functions or steps defined in your Shared Library. +3. **Document Your Implementation:** + - In `solution.md`, include examples of your shared library code. + - Explain how Shared Libraries enhance maintainability and consistency in pipeline code. + +--- + +### Task 6: Integrate Trivy for Vulnerability Scanning +1. **Add a Vulnerability Scanning Stage:** + - Update your Jenkins pipeline to include a stage that runs Trivy on your Docker image: + ```groovy + stage('Vulnerability Scan') { + steps { + sh 'trivy image /sample-app:v1.0' + } + } + ``` +2. **Review and Save the Scan Output:** + - Run the pipeline and capture the Trivy output. + - Optionally, save the output for analysis: + ```bash + trivy image /sample-app:v1.0 > trivy_report.txt + ``` +3. **Document Your Findings:** + - In `solution.md`, summarize the key vulnerabilities, their severity, and any recommended remediation steps. + - Reflect on how these insights can improve your image security. + +--- + +### Task 7: Documentation and Critical Reflection +1. **Update `solution.md`:** + - List all commands, configuration steps, and scripts used throughout the challenge. + - Provide detailed explanations for each task. +2. **Reflect on Jenkins in CI/CD:** + - Write a brief reflection on how Jenkins—with its pipelines, agents, RBAC, shared libraries, and vulnerability scanning—integrates into a modern DevOps workflow. + - Discuss any challenges faced and lessons learned. + +--- + +### Bonus Task: Integrate Email Notifications into Your Jenkins Pipeline +1. **Configure Email Notifications:** + - Ensure that your Jenkins instance is configured to send emails by setting up the SMTP server details under "Manage Jenkins" → "Configure System". +2. **Update Your Jenkinsfile:** + - Add a new stage to your pipeline that sends an email notification upon build completion. 
For example: + ```groovy + stage('Notify') { + steps { + emailext ( + subject: "Build Notification: ${env.JOB_NAME} - Build #${env.BUILD_NUMBER}", + body: "The build has completed successfully. Check the details at: ${env.BUILD_URL}", + recipientProviders: [[$class: 'DevelopersRecipientProvider']] + ) + } + } + ``` +3. **Test the Notification:** + - Trigger your pipeline and verify that an email is sent. +4. **Document the Integration:** + - In `solution.md`, explain how you configured email notifications, including any challenges and how you resolved them. + +--- + +**Troubleshooting Tips:** +- If your pipeline fails, review the console output for error messages and use `docker logs` for container-specific issues. +- Verify agent connectivity by checking the node status in "Manage Jenkins" → "Manage Nodes and Clouds." +- For RBAC issues, ensure that user permissions are correctly configured by testing with different roles. + +**Monitoring & Maintenance:** +- Regularly check Jenkins system logs and build histories to monitor performance. +- Use Jenkins plugins such as the Monitoring Plugin to gain insights into resource usage and build metrics. + +**Advanced Debugging:** +- Add `echo` statements in your Jenkinsfile to print environment variables and intermediate outputs. +- Enable verbose logging in Jenkins (if needed) to troubleshoot complex pipeline issues. +- Consider using the "Replay" feature in Jenkins to run modified pipeline scripts without committing changes. + +--- + +## How to Submit + +1. **Push Your Final Work to GitHub:** + - Ensure that all files (e.g., Jenkinsfile, configuration scripts, `solution.md`, etc.) are committed and pushed to your repository. + +2. **Create a Pull Request (PR):** + - Open a PR from your branch (e.g., `jenkins-challenge`) to the main repository. 
+ - **Title:** + ``` + Week 6 Challenge - DevOps Batch 9: Jenkins Basics & Advanced Challenge + ``` + - **PR Description:** + - Summarize your approach and list key commands and configurations. + - Include screenshots or logs as evidence of your work. + +3. **Share Your Experience on LinkedIn:** + - Write a post summarizing your Jenkins challenge experience. + - Include key takeaways, challenges faced, and insights (e.g., integration with Trivy, RBAC configuration, or email notifications). + - Use the hashtags: **#90DaysOfDevOps #Jenkins #CI/CD #DevOps** + - Optionally, provide links to your repository or blog posts detailing your journey. + +--- + +## Additional Resources + +- **[Jenkins Official Documentation](https://www.jenkins.io/doc/)** +- **[Jenkins Pipeline Documentation](https://www.jenkins.io/doc/book/pipeline/)** +- **[Jenkins Agents and Nodes](https://www.jenkins.io/doc/book/managing/nodes/)** +- **[Jenkins RBAC & Role Strategy Plugin](https://plugins.jenkins.io/role-strategy/)** +- **[Jenkins Shared Libraries](https://www.jenkins.io/doc/book/pipeline/shared-libraries/)** +- **[Trivy Vulnerability Scanner](https://trivy.dev/latest/docs/scanner/vulnerability/)** + +--- + +Complete this challenge, document your journey thoroughly in `solution.md`, and share your work to demonstrate your mastery of Basics & advanced Jenkins concepts. 
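As a starting point for the pipeline job in Task 2, a minimal declarative Jenkinsfile with the suggested **Build**, **Test**, and **Deploy** stages could look like the sketch below; the `sh` commands are placeholders to replace with your application's real build, test, and deploy commands:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Placeholder: swap in your real build command, e.g. a docker build
                sh 'echo "Building the sample application..."'
            }
        }
        stage('Test') {
            steps {
                // Placeholder: run your test suite here
                sh 'echo "Running tests..."'
            }
        }
        stage('Deploy') {
            steps {
                // Placeholder: deploy step, e.g. docker run or a deploy script
                sh 'echo "Deploying the application..."'
            }
        }
    }
}
```

Each stage shows up separately in the Jenkins UI, which makes it easy to see exactly where a run fails.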
From b425cc17a77e32da995f3e114e7afebb71eb5210 Mon Sep 17 00:00:00 2001 From: Amitabh-DevOps Date: Wed, 26 Feb 2025 19:08:20 +0300 Subject: [PATCH 10/33] Updated task for week6 --- 2025/cicd/README.md | 292 ++++++++++++++++++++++++++++++-------------- 1 file changed, 199 insertions(+), 93 deletions(-) diff --git a/2025/cicd/README.md b/2025/cicd/README.md index 3000c157b1..2d68a9b891 100644 --- a/2025/cicd/README.md +++ b/2025/cicd/README.md @@ -1,32 +1,17 @@ -# Week 6: Jenkins Basics & Advanced Challenge -In this challenge, you will deepen your understanding of Jenkins and its advanced features—essential for building robust CI/CD pipelines in a cloud environment. You will explore the Jenkins UI, create and run pipelines, configure agents and RBAC, leverage Shared Libraries, and integrate vulnerability scanning using Trivy. +# Week 6: Jenkins (CI/CD) Basics and Advanced Real-World Challenge -## Topics Covered -- **Jenkins UI Flow** – Navigating and understanding the Jenkins dashboard. -- **Jenkins Pipelines** – Building and automating CI/CD workflows. -- **Automate CI/CD** – Using pipelines to streamline deployments. -- **Agents / Nodes** – Configuring distributed builds. -- **RBAC (Role-Based Access Control)** – Securing your Jenkins environment. -- **Shared Libraries** – Reusing pipeline code across projects. -- **Trivy** – Scanning Docker images for vulnerabilities. +This set of tasks is designed as part of the 90DaysOfDevOps challenge to simulate real-world scenarios you might encounter on the job or in technical interviews. By completing these tasks, you'll gain practical experience with advanced Jenkins topics, including pipelines, distributed agents, RBAC, shared libraries, vulnerability scanning, and automated notifications. -Complete each task below and document your steps, commands, and observations in a file named `solution.md`. Finally, share your experience on LinkedIn using the provided guidelines. 
+Complete each task and document all steps, commands, screenshots, and observations in a file named `solution.md`. This documentation will serve as both your preparation guide and a portfolio piece for interviews. --- -## Challenge Tasks +## Task 1: Create a Jenkins Pipeline Job for CI/CD -### Task 1: Explore the Jenkins UI Flow -1. **Log In and Navigate:** - Access your Jenkins instance hosted on AWS. - Explore the dashboard, job configurations, build history, and system logs. -2. **Document Your Observations:** - In `solution.md`, describe the main sections of the Jenkins UI. - Explain how you navigate between different views (jobs, builds, plugins, etc.). +**Scenario:** +Create an end-to-end CI/CD pipeline for a sample application. ---- - -### Task 2: Create a Jenkins Pipeline Job for CI/CD +**Steps:** 1. **Set Up a Pipeline Job:** - Create a new Pipeline job in Jenkins. - Write a basic Jenkinsfile that automates the build, test, and deployment of a sample application (e.g., a simple web app). - Suggested stages: **Build**, **Test**, **Deploy**. 2. **Run and Verify the Pipeline:** - Trigger the pipeline and ensure each stage runs successfully. - Verify the execution by checking console logs and, if applicable, using `docker ps` to confirm container status. -3. **Document Your Pipeline:** - - In `solution.md`, include your Jenkinsfile code and explain the purpose of each stage. +3. **Document in `solution.md`:** + - Include your Jenkinsfile code and explain the purpose of each stage. + - Note any issues you encountered and how you resolved them. + +**Interview Questions:** +- How do declarative pipelines streamline the CI/CD process compared to scripted pipelines? +- What are the benefits of breaking the pipeline into distinct stages? --- -### Task 3: Configure Jenkins Agents / Nodes -1. 
**Set Up an Agent:** - - Connect an external agent (using a VM, Docker container, or cloud instance) to your Jenkins master. - - Configure the agent via "Manage Jenkins" → "Manage Nodes and Clouds". -2. **Assign Jobs to the Agent:** - - Modify your pipeline job (from Task 2) to run on the newly configured agent. -3. **Document the Process:** - - In `solution.md`, detail the steps taken to configure the agent. - - Explain how job assignment to agents improves scalability and parallel execution. +## Task 2: Build a Multi-Branch Pipeline for a Microservices Application + +**Scenario:** +You have a microservices-based application with multiple components stored in separate Git repositories. Your goal is to create a multi-branch pipeline that builds, tests, and deploys each service concurrently. + +**Steps:** +1. **Set Up a Multi-Branch Pipeline Job:** + - Create a new multi-branch pipeline in Jenkins. + - Configure it to scan your Git repository (or repositories) for branches. +2. **Develop a Jenkinsfile for Each Service:** + - Write a Jenkinsfile that includes stages for **Checkout**, **Build**, **Test**, and **Deploy**. + - Include parallel stages if applicable (e.g., running tests for different services concurrently). +3. **Simulate a Merge Scenario:** + - Create a feature branch and simulate a pull request workflow (using the Jenkins “Pipeline Multibranch” plugin with PR support if available). +4. **Document in `solution.md`:** + - List the Jenkinsfile(s) used, explain your pipeline design, and describe how multi-branch pipelines help manage microservices deployments in production. + +**Interview Questions:** +- How does a multi-branch pipeline improve continuous integration for microservices? +- What challenges might you face when merging feature branches in a multi-branch pipeline? --- -### Task 4: Implement RBAC in Jenkins -1. 
**Configure Role-Based Access Control:** - - Set up different user roles (e.g., Admin, Developer, Viewer) using "Matrix-based security" or the Role Strategy Plugin. -2. **Test the Access Controls:** - - Create test user accounts and verify that each role has the appropriate permissions. -3. **Document Your Configuration:** - - In `solution.md`, explain your RBAC setup and its importance in securing your Jenkins environment. +## Task 3: Configure and Scale Jenkins Agents/Nodes + +**Scenario:** +Your build workload has increased, and you need to configure multiple agents (across different OS types) to distribute the load. + +**Steps:** +1. **Set Up Multiple Agents:** + - Configure at least two agents (e.g., one Linux-based and one Windows-based) in Jenkins. + - Use Docker containers or VMs to simulate different environments. +2. **Label Agents:** + - Assign labels (e.g., `linux`, `windows`) and modify your Jenkinsfile to run appropriate stages on the correct agent. +3. **Run Parallel Jobs:** + - Create jobs that run in parallel across these agents. +4. **Document in `solution.md`:** + - Explain how you configured and verified each agent. + - Describe the benefits of distributed builds in terms of speed and reliability. + +**Interview Questions:** +- What are the benefits and challenges of using distributed agents in Jenkins? +- How can you ensure that jobs are assigned to the correct agent in a multi-platform environment? --- -### Task 5: Utilize Jenkins Shared Libraries -1. **Create a Shared Library:** - - Develop a simple Shared Library containing reusable pipeline code (e.g., a common stage for running tests or sending notifications). - - Host the library in a separate Git repository. +## Task 4: Implement and Test RBAC in a Multi-Team Environment + +**Scenario:** +In a large organization, different teams (developers, testers, and operations) require different levels of access to Jenkins. You need to configure RBAC to secure your CI/CD pipeline. + +**Steps:** +1. 
**Configure RBAC:** + - Use Matrix-based security or the Role Strategy Plugin to create roles (e.g., Admin, Developer, Tester). + - Define permissions for each role. +2. **Create Test Accounts:** + - Simulate real-world usage by creating user accounts for each role and verifying access. +3. **Document in `solution.md`:** + - Include screenshots or logs of your RBAC configuration. + - Explain the importance of access control and provide a potential risk scenario that RBAC helps mitigate. + +**Interview Questions:** +- Why is RBAC essential in a CI/CD environment, and what are the consequences of weak access control? +- Can you describe a scenario where inadequate RBAC could lead to security issues? + +--- + +## Task 5: Develop and Integrate a Jenkins Shared Library + +**Scenario:** +You are working on multiple pipelines that share common tasks (like code quality checks or deployment steps). To avoid duplication and ensure consistency, you need to develop a Shared Library. + +**Steps:** +1. **Create a Shared Library Repository:** + - Set up a separate Git repository that hosts your shared library code. + - Develop reusable functions (e.g., a function for sending notifications or a common test stage). 2. **Integrate the Library:** - - Modify your Jenkinsfile from Task 2 to call functions or steps defined in your Shared Library. -3. **Document Your Implementation:** - - In `solution.md`, include examples of your shared library code. - - Explain how Shared Libraries enhance maintainability and consistency in pipeline code. + - Update your Jenkinsfile(s) from previous tasks to load and use the shared library. + - Use syntax similar to: + ```groovy + @Library('my-shared-library') _ + pipeline { + // pipeline code using shared functions + } + ``` +3. **Document in `solution.md`:** + - Provide code examples from your shared library. + - Explain how this approach improves maintainability and reduces errors. 
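To make the shared library concrete, a reusable step can be defined as a global variable in the library repository's `vars/` directory; Jenkins exposes each such file as a pipeline step named after the file. The file and function name below (`notifyBuild`) are hypothetical examples:

```groovy
// vars/notifyBuild.groovy in the shared library repository (hypothetical name)
// Jenkins exposes this as a pipeline step: notifyBuild('SUCCESS')
def call(String status = 'SUCCESS') {
    // Reusable notification step available to any pipeline that loads the library
    echo "Build ${env.JOB_NAME} #${env.BUILD_NUMBER} finished with status: ${status}"
}
```

A pipeline that has loaded the library with `@Library('my-shared-library') _` can then simply call `notifyBuild('SUCCESS')` inside a `steps` block, keeping the notification logic in one place.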
+ +**Interview Questions:** +- How do shared libraries contribute to code reuse and maintainability in large organizations? +- Provide an example of a function that would be ideal for a shared library and explain its benefits. --- -### Task 6: Integrate Trivy for Vulnerability Scanning -1. **Add a Vulnerability Scanning Stage:** +## Task 6: Integrate Vulnerability Scanning with Trivy + +**Scenario:** +Security is critical in CI/CD. You must ensure that the Docker images built in your pipeline are free from known vulnerabilities. + +**Steps:** +1. **Add a Vulnerability Scan Stage:** - Update your Jenkins pipeline to include a stage that runs Trivy on your Docker image: ```groovy stage('Vulnerability Scan') { @@ -83,90 +139,141 @@ Complete each task below and document your steps, commands, and observations in } } ``` -2. **Review and Save the Scan Output:** - - Run the pipeline and capture the Trivy output. - - Optionally, save the output for analysis: - ```bash - trivy image /sample-app:v1.0 > trivy_report.txt - ``` -3. **Document Your Findings:** - - In `solution.md`, summarize the key vulnerabilities, their severity, and any recommended remediation steps. - - Reflect on how these insights can improve your image security. +2. **Configure Fail Criteria:** + - Optionally, set the stage to fail the build if critical vulnerabilities are detected. +3. **Document in `solution.md`:** + - Summarize the scan output, note the vulnerabilities and severity, and describe any remediation steps. + - Reflect on the importance of automated security scanning in CI/CD pipelines. + +**Interview Questions:** +- Why is integrating vulnerability scanning into a CI/CD pipeline important? +- How does Trivy help improve the security of your Docker images? --- -### Task 7: Documentation and Critical Reflection -1. **Update `solution.md`:** - - List all commands, configuration steps, and scripts used throughout the challenge. - - Provide detailed explanations for each task. -2. 
**Reflect on Jenkins in CI/CD:** - - Write a brief reflection on how Jenkins—with its pipelines, agents, RBAC, shared libraries, and vulnerability scanning—integrates into a modern DevOps workflow. - - Discuss any challenges faced and lessons learned. +## Task 7: Dynamic Pipeline Parameterization + +**Scenario:** +In production environments, pipelines need to be flexible and configurable. Implement dynamic parameterization to allow the pipeline to accept runtime parameters (such as target environment, version numbers, or deployment options). + +**Steps:** +1. **Modify Your Jenkinsfile:** + - Update your Jenkinsfile to accept parameters. For example: + ```groovy + pipeline { + agent any + parameters { + string(name: 'TARGET_ENV', defaultValue: 'staging', description: 'Deployment target environment') + string(name: 'APP_VERSION', defaultValue: '1.0.0', description: 'Application version to deploy') + } + stages { + stage('Build') { + steps { + echo "Building version ${params.APP_VERSION} for ${params.TARGET_ENV} environment..." + // Build commands here + } + } + // Add other stages as needed + } + } + ``` +2. **Run the Parameterized Pipeline:** + - Trigger the pipeline and provide different parameter values to observe how the pipeline behavior changes. +3. **Document in `solution.md`:** + - Explain how parameterization makes the pipeline dynamic. + - Include sample outputs and discuss how this flexibility is useful in a production CI/CD environment. + +**Interview Questions:** +- How does pipeline parameterization improve the flexibility of CI/CD workflows? +- Provide an example of a scenario where dynamic parameters would be critical in a deployment pipeline. --- -### Bonus Task: Integrate Email Notifications into Your Jenkins Pipeline -1. **Configure Email Notifications:** - - Ensure that your Jenkins instance is configured to send emails by setting up the SMTP server details under "Manage Jenkins" → "Configure System". 
+## Task 8: Integrate Email Notifications for Build Events + +**Scenario:** +Automated notifications keep teams informed about build statuses. Configure Jenkins to send email alerts upon build completion or failure. + +**Steps:** +1. **Configure SMTP Settings:** + - Set up SMTP details in Jenkins under "Manage Jenkins" → "Configure System". 2. **Update Your Jenkinsfile:** - - Add a new stage to your pipeline that sends an email notification upon build completion. For example: + - Add a stage that uses the `emailext` plugin to send notifications: ```groovy stage('Notify') { steps { emailext ( subject: "Build Notification: ${env.JOB_NAME} - Build #${env.BUILD_NUMBER}", - body: "The build has completed successfully. Check the details at: ${env.BUILD_URL}", + body: "The build has completed successfully. Check details at: ${env.BUILD_URL}", recipientProviders: [[$class: 'DevelopersRecipientProvider']] ) } } ``` 3. **Test the Notification:** - - Trigger your pipeline and verify that an email is sent. -4. **Document the Integration:** - - In `solution.md`, explain how you configured email notifications, including any challenges and how you resolved them. + - Trigger the pipeline and verify that an email is sent. +4. **Document in `solution.md`:** + - Explain your configuration steps, note any challenges, and describe how you resolved them. + +**Interview Questions:** +- What are the advantages of automating email notifications in CI/CD? +- How would you troubleshoot issues if email notifications fail to send? --- -**Troubleshooting Tips:** -- If your pipeline fails, review the console output for error messages and use `docker logs` for container-specific issues. -- Verify agent connectivity by checking the node status in "Manage Jenkins" → "Manage Nodes and Clouds." -- For RBAC issues, ensure that user permissions are correctly configured by testing with different roles. 
+## Task 9: Troubleshooting, Monitoring & Advanced Debugging -**Monitoring & Maintenance:** -- Regularly check Jenkins system logs and build histories to monitor performance. -- Use Jenkins plugins such as the Monitoring Plugin to gain insights into resource usage and build metrics. +**Scenario:** +Real-world CI/CD pipelines sometimes fail. Demonstrate how you would troubleshoot and monitor your Jenkins environment. -**Advanced Debugging:** -- Add `echo` statements in your Jenkinsfile to print environment variables and intermediate outputs. -- Enable verbose logging in Jenkins (if needed) to troubleshoot complex pipeline issues. -- Consider using the "Replay" feature in Jenkins to run modified pipeline scripts without committing changes. +**Steps:** +1. **Troubleshooting:** + - Simulate a pipeline failure (e.g., by introducing an error in the Jenkinsfile) and document your troubleshooting process. + - Use commands like `docker logs` and review Jenkins console output. +2. **Monitoring:** + - Describe methods for monitoring Jenkins, such as using system logs or monitoring plugins. +3. **Advanced Debugging:** + - Add debugging statements (e.g., `echo` commands) in your Jenkinsfile to output environment variables or intermediate results. + - Use Jenkins' "Replay" feature to test modifications without committing changes. +4. **Document in `solution.md`:** + - Provide a detailed account of your troubleshooting, monitoring, and debugging strategies. + - Reflect on how these practices help maintain a stable CI/CD environment. + +**Interview Questions:** +- How would you approach troubleshooting a failing Jenkins pipeline? +- What are some effective strategies for monitoring Jenkins in a production environment? --- ## How to Submit 1. **Push Your Final Work to GitHub:** - - Ensure that all files (e.g., Jenkinsfile, configuration scripts, `solution.md`, etc.) are committed and pushed to your repository. 
+ - Ensure all files (e.g., Jenkinsfile, configuration scripts, `solution.md`, etc.) are committed and pushed to your repository. 2. **Create a Pull Request (PR):** - Open a PR from your branch (e.g., `jenkins-challenge`) to the main repository. - **Title:** ``` - Week 6 Challenge - DevOps Batch 9: Jenkins Basics & Advanced Challenge + Week 6 Challenge - DevOps Batch 9: Jenkins CI/CD Challenge ``` - **PR Description:** - - Summarize your approach and list key commands and configurations. - - Include screenshots or logs as evidence of your work. + - Summarize your approach, list key commands/configurations, and include screenshots or logs as evidence. 3. **Share Your Experience on LinkedIn:** - Write a post summarizing your Jenkins challenge experience. - - Include key takeaways, challenges faced, and insights (e.g., integration with Trivy, RBAC configuration, or email notifications). - - Use the hashtags: **#90DaysOfDevOps #Jenkins #CI/CD #DevOps** + - Include key takeaways, challenges faced, and insights (e.g., agent configuration, RBAC, shared libraries, vulnerability scanning, and troubleshooting). + - Use the hashtags: **#90DaysOfDevOps #Jenkins #CI/CD #DevOps #InterviewPrep** - Optionally, provide links to your repository or blog posts detailing your journey. 
--- + +## TrainWithShubham Resources for Jenkins CI/CD + +- **[Jenkins Short notes](https://www.trainwithshubham.com/products/64aac20780964e534608664d?dgps_u=l&dgps_s=ucpd&dgps_t=cp_u&dgps_u_st=p&dgps_uid=66c972da3795a9659545d71a)** +- **[Jenkins One-Shot Video](https://youtu.be/XaSdKR2fOU4?si=eDmLQMSSh_eMPT_p)** +- **[TWS blog on Jenkins CI/CD](https://trainwithshubham.blog/automate-cicd-spring-boot-banking-app-jenkins-docker-github/)** + ## Additional Resources - **[Jenkins Official Documentation](https://www.jenkins.io/doc/)** @@ -178,5 +285,4 @@ Complete each task below and document your steps, commands, and observations in --- -Complete this challenge, document your journey thoroughly in `solution.md`, and share your work to demonstrate your mastery of Basics & advanced Jenkins concepts. - +Complete these tasks, answer the interview questions in your documentation, and use your work as a reference to prepare for real-world DevOps challenges and technical interviews. \ No newline at end of file From 2471e6d999237e69c1f03c80e449443dc45a9411 Mon Sep 17 00:00:00 2001 From: Amitabh-DevOps Date: Mon, 3 Mar 2025 17:09:56 +0300 Subject: [PATCH 11/33] Added tasks for week 7 : Kubernetes --- 2025/kubernetes/README.md | 298 ++++++++++++++++++++++++++++++++++++++ 1 file changed, 298 insertions(+) diff --git a/2025/kubernetes/README.md b/2025/kubernetes/README.md index 8b13789179..a35efdde1f 100644 --- a/2025/kubernetes/README.md +++ b/2025/kubernetes/README.md @@ -1 +1,299 @@ +# Week 7 : Kubernetes Basics & Advanced Challenges +This set of tasks is designed as part of the 90DaysOfDevOps challenge to simulate real-world scenarios you might encounter on the job or in technical interviews. 
By completing these tasks on the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop), you'll gain practical experience with advanced Kubernetes topics, including architecture, core objects, networking, storage management, configuration, autoscaling, security & access control, job scheduling, and bonus topics like Helm, Service Mesh, or AWS EKS. + +> [!IMPORTANT] +> +> 1. Fork the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop) and implement all tasks on your fork. +> 2. Document all steps, commands, screenshots, and observations in a file named `solution.md` within your fork. +> 3. Submit your `solution.md` file in the Week 7 (Kubernetes) task folder of the 90DaysOfDevOps repository. + +--- + +## Task 1: Understand Kubernetes Architecture & Deploy a Sample Pod + +**Scenario:** +Familiarize yourself with Kubernetes’ control plane and worker node components, then deploy a simple Pod manually. + +**Steps:** +1. **Study Kubernetes Architecture:** + - Review the roles of control plane components (API Server, Scheduler, Controller Manager, etcd, Cloud Controller Manager) and worker node components (Kubelet, Container Runtime, Kube Proxy). +2. **Deploy a Sample Pod:** + - Create a YAML file (e.g., `pod.yaml`) to deploy a simple Pod (such as an NGINX container). + - Apply the YAML using: + ```bash + kubectl apply -f pod.yaml + ``` +3. **Document in `solution.md`:** + - Describe the Kubernetes architecture components. + - Include your Pod YAML and explain each section. + +> [!NOTE] +> +> **Interview Questions:** +> - Can you explain how the Kubernetes control plane components work together and the role of etcd in this architecture? +> - If a Pod fails to start, what steps would you take to diagnose the issue? 
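A minimal `pod.yaml` for step 2 of Task 1 might look like the following sketch; the Pod name, label, and image tag are examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod        # example name
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.25  # example tag
      ports:
        - containerPort: 80
```

After `kubectl apply -f pod.yaml`, verify it with `kubectl get pods` and inspect details or startup errors with `kubectl describe pod nginx-pod`.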
+ +--- + +## Task 2: Deploy and Manage Core Kubernetes Objects + +**Scenario:** +Deploy core Kubernetes objects for the online_shop application, including Deployments, ReplicaSets, StatefulSets, DaemonSets, and use Namespaces to isolate resources. + +**Steps:** +1. **Create a Namespace:** + - Write a YAML file to create a Namespace for the online_shop application. + - Apply the YAML: + ```bash + kubectl apply -f namespace.yaml + ``` +2. **Deploy a Deployment:** + - Create a YAML file for a Deployment (within your Namespace) that manages a set of Pods running a component of online_shop. + - Verify that a ReplicaSet is created automatically. +3. **Deploy a StatefulSet:** + - Write a YAML file for a StatefulSet (for example, for a database component) and apply it. +4. **Deploy a DaemonSet:** + - Create a YAML file for a DaemonSet to run a Pod on every node. +5. **Document in `solution.md`:** + - Include the YAML files for the Namespace, Deployment, StatefulSet, and DaemonSet. + - Explain the differences between these objects and when to use each. + +> [!NOTE] +> +> **Interview Questions:** +> - How does a Deployment ensure that the desired state of Pods is maintained in a cluster? +> - Can you explain the differences between a Deployment, StatefulSet, and DaemonSet, and provide an example scenario for each? + +--- + +## Task 3: Networking & Exposure – Create Services, Ingress, and Network Policies + +**Scenario:** +Expose your online_shop application to internal and external traffic by creating Services and configuring an Ingress, while using Network Policies to secure communication. + +**Steps:** +1. **Create a Service:** + - Write a YAML file for a Service of type ClusterIP. + - Modify the Service type to NodePort or LoadBalancer and apply the YAML. +2. **Configure an Ingress:** + - Create an Ingress resource to route external traffic to your application. +3. 
**Implement a Network Policy:** + - Write a YAML file for a Network Policy that restricts traffic to your application Pods. +4. **Document in `solution.md`:** + - Include the YAML files for your Service, Ingress, and Network Policy. + - Explain the differences between Service types and the roles of Ingress and Network Policies. + +> [!NOTE] +> +> **Interview Questions:** +> - How do NodePort and LoadBalancer Services differ in terms of exposure and use cases? +> - What is the role of a Network Policy in Kubernetes, and can you describe a scenario where it is essential? + +--- + +## Task 4: Storage Management – Use Persistent Volumes and Claims + +**Scenario:** +Deploy a component of the online_shop application that requires persistent storage by creating Persistent Volumes (PV), Persistent Volume Claims (PVC), and a StorageClass for dynamic provisioning. + +**Steps:** +1. **Create a Persistent Volume and Claim:** + - Write YAML files for a static PV and a corresponding PVC. +2. **Deploy an Application Using the PVC:** + - Modify a Pod or Deployment YAML to mount the PVC. +3. **Document in `solution.md`:** + - Include your PV, PVC, and application YAML. + - Explain how StorageClasses facilitate dynamic storage provisioning. + +> [!NOTE] +> +> **Interview Questions:** +> - What are the main differences between a Persistent Volume and a Persistent Volume Claim? +> - How does a StorageClass simplify storage management in Kubernetes? + +--- + +## Task 5: Configuration & Secrets Management with ConfigMaps and Secrets + +**Scenario:** +Deploy a component of the online_shop application that consumes external configuration and sensitive data using ConfigMaps and Secrets. + +**Steps:** +1. **Create a ConfigMap:** + - Write a YAML file for a ConfigMap containing configuration data. +2. **Create a Secret:** + - Write a YAML file for a Secret containing sensitive information. +3. **Deploy an Application:** + - Update your application YAML to mount the ConfigMap and Secret. +4. 
**Document in `solution.md`:** + - Include the YAML files and explain how the application uses these resources. + +> [!NOTE] +> +> **Interview Questions:** +> - How would you update a running application if a ConfigMap or Secret is modified? +> - What measures do you take to secure Secrets in Kubernetes? + +--- + +## Task 6: Autoscaling & Resource Management + +**Scenario:** +Implement autoscaling for a component of the online_shop application using the Horizontal Pod Autoscaler (HPA). Optionally, explore Vertical Pod Autoscaling (VPA) and ensure the Metrics Server is running. + +**Steps:** +1. **Deploy an Application with Resource Requests:** + - Deploy an application with defined resource requests and limits. +2. **Create an HPA Resource:** + - Write a YAML file for an HPA that scales the number of replicas based on CPU or memory usage. +3. **(Optional) Implement VPA & Metrics Server:** + - Optionally, deploy a VPA and verify that the Metrics Server is running. +4. **Document in `solution.md`:** + - Include the YAML files and explain how HPA (and optionally VPA) work. + - Discuss the benefits of autoscaling in production. + +> [!NOTE] +> +> **Interview Questions:** +> - What is the process by which the Horizontal Pod Autoscaler scales an application? +> - In what scenarios would vertical scaling (VPA) be more beneficial than horizontal scaling (HPA)? + +--- + +## Task 7: Security & Access Control + +**Scenario:** +Secure your Kubernetes cluster by implementing Role-Based Access Control (RBAC) and additional security measures. + +### Part A: RBAC Implementation +**Steps:** +1. **Configure RBAC:** + - Create roles and role bindings using YAML files for specific user groups (e.g., Admin, Developer, Tester). +2. **Create Test Accounts:** + - Simulate real-world usage by creating user accounts for each role and verifying access. +3. 
**Optional Enhancement:** + - Simulate an unauthorized action (e.g., a Developer attempting to delete a critical resource) and document how RBAC prevents it. + - Analyze RBAC logs (if available) to verify that unauthorized access attempts are recorded. +4. **Document in `solution.md`:** + - Include screenshots or logs of your RBAC configuration. + - Describe the roles, permissions, and potential risks mitigated by proper RBAC implementation. + +> [!NOTE] +> +> **Interview Questions:** +> - How do RBAC policies help secure a multi-team Kubernetes environment? +> - Can you provide an example of how improper RBAC could compromise a cluster? + +### Part B: Additional Security Controls +**Steps:** +1. **Set Up Taints & Tolerations:** + - Apply taints to nodes and specify tolerations in your Pod specifications. +2. **Define a Pod Disruption Budget (PDB):** + - Write a YAML file for a PDB to ensure a minimum number of Pods remain available during maintenance. +3. **Document in `solution.md`:** + - Include the YAML files and explain how taints, tolerations, and PDBs contribute to cluster stability and security. + +> [!NOTE] +> +> **Interview Questions:** +> - How do taints and tolerations ensure that critical workloads are isolated from interference? +> - Why are Pod Disruption Budgets important for maintaining application availability? + +--- + +## Task 8: Job Scheduling & Custom Resources + +**Scenario:** +Manage scheduled tasks and extend Kubernetes functionality by creating Jobs, CronJobs, and a Custom Resource Definition (CRD). + +**Steps:** +1. **Create a Job and CronJob:** + - Write YAML files for a Job (a one-time task) and a CronJob (a scheduled task). +2. **Create a Custom Resource Definition (CRD):** + - Write a YAML file for a CRD and use `kubectl` to create a custom resource. +3. **Document in `solution.md`:** + - Include the YAML files and explain the use cases for Jobs, CronJobs, and CRDs. + - Reflect on how CRDs extend Kubernetes capabilities. 
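+
+As a starting point, a minimal Job and CronJob for this task might look like the sketch below (the names, the schedule, and the `busybox` image are illustrative assumptions, not requirements of the challenge):
+
+```yaml
+# job.yaml — a one-time task that runs a single Pod to completion
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: demo-job            # hypothetical name
+spec:
+  backoffLimit: 3           # retry failed Pods up to 3 times
+  template:
+    spec:
+      restartPolicy: Never
+      containers:
+        - name: task
+          image: busybox
+          command: ["sh", "-c", "echo one-time task finished"]
+---
+# cronjob.yaml — spawns a Job on a cron schedule
+apiVersion: batch/v1
+kind: CronJob
+metadata:
+  name: demo-cronjob        # hypothetical name
+spec:
+  schedule: "*/5 * * * *"   # every 5 minutes
+  jobTemplate:
+    spec:
+      template:
+        spec:
+          restartPolicy: OnFailure
+          containers:
+            - name: task
+              image: busybox
+              command: ["sh", "-c", "date"]
+```
+
+Apply the files with `kubectl apply -f job.yaml -f cronjob.yaml` and watch the runs with `kubectl get jobs,cronjobs`.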
+ +> [!NOTE] +> +> **Interview Questions:** +> - What factors would influence your decision to use a CronJob versus a Job? +> - How do CRDs enable custom extensions in Kubernetes? + +--- + +## Task 9: Bonus Task: Advanced Deployment with Helm, Service Mesh, or EKS + +**Scenario:** +For an added challenge, deploy a component of the online_shop application using Helm, implement a basic Service Mesh (e.g., Istio), or deploy your cluster on AWS EKS. + +**Steps:** +1. **Helm Deployment:** + - Create a Helm chart for your application. + - Deploy the application using Helm and perform an update. + - *OR* +2. **Service Mesh Implementation:** + - Deploy a basic Service Mesh (using Istio, Linkerd, or Consul) and demonstrate traffic management between services. + - *OR* +3. **Deploy on AWS EKS:** + - Set up an EKS cluster and deploy your application there. +4. **Document in `solution.md`:** + - Include your Helm chart files, Service Mesh configuration, or EKS deployment details. + - Explain the advantages of using Helm, a Service Mesh, or EKS in a production environment. + +> [!NOTE] +> +> **Interview Questions:** +> - How does Helm simplify application deployments in Kubernetes? +> - What are the benefits of using a Service Mesh in a microservices architecture? +> - How does deploying on AWS EKS compare with managing your own Kubernetes cluster? + +--- + +## How to Submit + +1. **Push Your Final Work to GitHub:** + - Ensure all files (e.g., Manifest files, scripts, solution.md, etc.) are committed and pushed to your 90DaysOfDevOps repository. + +2. **Create a Pull Request (PR):** + - Open a PR from your branch (e.g., `kubernetes-challenge`) to the main repository. + - **Title:** + ``` + Week 7 Challenge - DevOps Batch 9: Kubernetes Basics & Advanced Challenge + ``` + - **PR Description:** + - Summarize your approach, list key commands/configurations, and include screenshots or logs as evidence. + +3. 
**Share Your Experience on LinkedIn:** + - Write a post summarizing your Kubernetes challenge experience. + - Include key takeaways, challenges faced, and insights (e.g., architecture, autoscaling, security, job scheduling, and advanced deployments). + - Use the hashtags: **#90DaysOfDevOps #Kubernetes #DevOps #InterviewPrep** + - Optionally, provide links to your fork or blog posts detailing your journey. + +--- + +## TrainWithShubham Resources for Kubernetes + +- **[Kubernetes Short Notes](https://www.trainwithshubham.com/products/6515573bf42fc83942cd112e?dgps_u=l&dgps_s=ucpd&dgps_t=cp_u&dgps_u_st=u&dgps_uid=66c972da3795a9659545d71a)** +- **[Kubernetes One-Shot Video](https://youtu.be/W04brGNgxN4?si=oPscVYz0VFzZig8Q)** +- **[TWS blog on Kubernetes](https://trainwithshubham.blog/)** + +--- + +## Additional Resources + +- **[Kubernetes Official Documentation](https://kubernetes.io/docs/)** +- **[Kubernetes Concepts](https://kubernetes.io/docs/concepts/)** +- **[Helm Documentation](https://helm.sh/docs/)** +- **[Istio Documentation](https://istio.io/latest/docs/)** +- **[Kubernetes RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)** +- **[Kubernetes Networking](https://kubernetes.io/docs/concepts/services-networking/)** +- **[Kubernetes Storage](https://kubernetes.io/docs/concepts/storage/)** +- **[Kubernetes Autoscaling](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/)** +- **[Kubernetes Custom Resource Definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/)** + +--- + +Complete these tasks, answer the interview questions in your documentation, and use your work as a reference to prepare for real-world DevOps challenges and technical interviews. 
From 57aa52acefe8379f3f1074bf1a053b9d3f0b942e Mon Sep 17 00:00:00 2001 From: Amitabh-DevOps Date: Tue, 4 Mar 2025 11:09:15 +0300 Subject: [PATCH 12/33] Updated task for week7 : Kubernetes --- 2025/kubernetes/README.md | 20 ++++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/2025/kubernetes/README.md b/2025/kubernetes/README.md index a35efdde1f..030d3fd81b 100644 --- a/2025/kubernetes/README.md +++ b/2025/kubernetes/README.md @@ -1,10 +1,10 @@ # Week 7 : Kubernetes Basics & Advanced Challenges -This set of tasks is designed as part of the 90DaysOfDevOps challenge to simulate real-world scenarios you might encounter on the job or in technical interviews. By completing these tasks on the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop), you'll gain practical experience with advanced Kubernetes topics, including architecture, core objects, networking, storage management, configuration, autoscaling, security & access control, job scheduling, and bonus topics like Helm, Service Mesh, or AWS EKS. +This set of tasks is designed as part of the 90DaysOfDevOps challenge to simulate real-world scenarios you might encounter on the job or in technical interviews. By completing these tasks on the [SpringBoot BankApp](https://github.com/Amitabh-DevOps/Springboot-BankApp), you'll gain practical experience with advanced Kubernetes topics, including architecture, core objects, networking, storage management, configuration, autoscaling, security & access control, job scheduling, and bonus topics like Helm, Service Mesh, or AWS EKS. > [!IMPORTANT] > -> 1. Fork the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop) and implement all tasks on your fork. +> 1. Fork the [SpringBoot BankApp](https://github.com/Amitabh-DevOps/Springboot-BankApp) and implement all tasks on your fork. > 2. Document all steps, commands, screenshots, and observations in a file named `solution.md` within your fork. > 3. 
Submit your `solution.md` file in the Week 7 (Kubernetes) task folder of the 90DaysOfDevOps repository. @@ -39,17 +39,17 @@ Familiarize yourself with Kubernetes’ control plane and worker node components ## Task 2: Deploy and Manage Core Kubernetes Objects **Scenario:** -Deploy core Kubernetes objects for the online_shop application, including Deployments, ReplicaSets, StatefulSets, DaemonSets, and use Namespaces to isolate resources. +Deploy core Kubernetes objects for the SpringBoot BankApp application, including Deployments, ReplicaSets, StatefulSets, DaemonSets, and use Namespaces to isolate resources. **Steps:** 1. **Create a Namespace:** - - Write a YAML file to create a Namespace for the online_shop application. + - Write a YAML file to create a Namespace for the SpringBoot BankApp application. - Apply the YAML: ```bash kubectl apply -f namespace.yaml ``` 2. **Deploy a Deployment:** - - Create a YAML file for a Deployment (within your Namespace) that manages a set of Pods running a component of online_shop. + - Create a YAML file for a Deployment (within your Namespace) that manages a set of Pods running a component of SpringBoot BankApp. - Verify that a ReplicaSet is created automatically. 3. **Deploy a StatefulSet:** - Write a YAML file for a StatefulSet (for example, for a database component) and apply it. @@ -70,7 +70,7 @@ Deploy core Kubernetes objects for the online_shop application, including Deploy ## Task 3: Networking & Exposure – Create Services, Ingress, and Network Policies **Scenario:** -Expose your online_shop application to internal and external traffic by creating Services and configuring an Ingress, while using Network Policies to secure communication. +Expose your SpringBoot BankApp application to internal and external traffic by creating Services and configuring an Ingress, while using Network Policies to secure communication. **Steps:** 1. 
**Create a Service:** @@ -95,7 +95,7 @@ Expose your online_shop application to internal and external traffic by creating ## Task 4: Storage Management – Use Persistent Volumes and Claims **Scenario:** -Deploy a component of the online_shop application that requires persistent storage by creating Persistent Volumes (PV), Persistent Volume Claims (PVC), and a StorageClass for dynamic provisioning. +Deploy a component of the SpringBoot BankApp application that requires persistent storage by creating Persistent Volumes (PV), Persistent Volume Claims (PVC), and a StorageClass for dynamic provisioning. **Steps:** 1. **Create a Persistent Volume and Claim:** @@ -117,7 +117,7 @@ Deploy a component of the online_shop application that requires persistent stora ## Task 5: Configuration & Secrets Management with ConfigMaps and Secrets **Scenario:** -Deploy a component of the online_shop application that consumes external configuration and sensitive data using ConfigMaps and Secrets. +Deploy a component of the SpringBoot BankApp application that consumes external configuration and sensitive data using ConfigMaps and Secrets. **Steps:** 1. **Create a ConfigMap:** @@ -140,7 +140,7 @@ Deploy a component of the online_shop application that consumes external configu ## Task 6: Autoscaling & Resource Management **Scenario:** -Implement autoscaling for a component of the online_shop application using the Horizontal Pod Autoscaler (HPA). Optionally, explore Vertical Pod Autoscaling (VPA) and ensure the Metrics Server is running. +Implement autoscaling for a component of the SpringBoot BankApp application using the Horizontal Pod Autoscaler (HPA). Optionally, explore Vertical Pod Autoscaling (VPA) and ensure the Metrics Server is running. **Steps:** 1. 
**Deploy an Application with Resource Requests:** @@ -227,7 +227,7 @@ Manage scheduled tasks and extend Kubernetes functionality by creating Jobs, Cro ## Task 9: Bonus Task: Advanced Deployment with Helm, Service Mesh, or EKS **Scenario:** -For an added challenge, deploy a component of the online_shop application using Helm, implement a basic Service Mesh (e.g., Istio), or deploy your cluster on AWS EKS. +For an added challenge, deploy a component of the SpringBoot BankApp application using Helm, implement a basic Service Mesh (e.g., Istio), or deploy your cluster on AWS EKS. **Steps:** 1. **Helm Deployment:** From 32ef666e695775f4a5c921fcfd5f72f71fc07f1b Mon Sep 17 00:00:00 2001 From: Amitabh-DevOps Date: Sun, 23 Mar 2025 09:19:27 +0530 Subject: [PATCH 13/33] Added task for week-8 : Terraform --- 2025/terraform/README.md | 227 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 227 insertions(+) diff --git a/2025/terraform/README.md b/2025/terraform/README.md index 8b13789179..26a696d37c 100644 --- a/2025/terraform/README.md +++ b/2025/terraform/README.md @@ -1 +1,228 @@ +# Week 8: Terraform (Infrastructure as Code) Challenge +This set of tasks is designed as part of the 90DaysOfDevOps challenge to simulate complex, real-world scenarios you might encounter on the job or in technical interviews. By completing these tasks on the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop), you'll gain practical experience with advanced Terraform topics, including provisioning, state management, variables, modules, workspaces, resource lifecycle management, drift detection, and environment management. + +**Important:** +1. Fork the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop) and implement all tasks on your fork. +2. Document all steps, commands, screenshots, and observations in a file named `solution.md` within your fork. +3. 
Submit your `solution.md` file in the Week 8 (Terraform) task folder of the 90DaysOfDevOps repository. + +--- + +## Task 1: Install Terraform, Initialize, and Provision a Basic Resource + +**Scenario:** +Begin by installing Terraform, initializing a project, and provisioning a basic resource (e.g., an AWS EC2 instance) to validate your setup. + +**Steps:** +1. **Install Terraform:** + - Download and install Terraform on your local machine. +2. **Initialize a Terraform Project:** + - Create a new directory for your Terraform project. + - Run `terraform init` to initialize the project. +3. **Provision a Basic Resource:** + - Create a configuration file (e.g., `main.tf`) to provision an AWS EC2 instance (or a similar resource for your cloud provider). + - Run `terraform apply` and confirm the changes. +4. **Document in `solution.md`:** + - Include the installation steps, your `main.tf` file, and the output of your `terraform apply` command. + +**Interview Questions:** +- How does Terraform manage resource creation and state? +- What is the significance of the `terraform init` command in a new project? + +--- + +## Task 2: Manage Terraform State with a Remote Backend + +**Scenario:** +Ensuring state consistency is critical when multiple team members work on infrastructure. Configure a remote backend (e.g., AWS S3 with DynamoDB for locking) to store your Terraform state file. + +**Steps:** +1. **Configure a Remote Backend:** + - Create a backend configuration in your `main.tf` or a separate backend file to configure a remote backend. +2. **Reinitialize Terraform:** + - Run `terraform init` to reinitialize your project with the new backend. +3. **Document in `solution.md`:** + - Include the backend configuration details. + - Explain the benefits of using a remote backend and state locking in collaborative environments. + +**Interview Questions:** +- Why is remote state management important in Terraform? 
+- How does state locking prevent conflicts during collaborative updates? + +--- + +## Task 3: Use Variables, Outputs, and Workspaces + +**Scenario:** +Improve the flexibility and reusability of your Terraform configuration by using variables, outputs, and workspaces to manage multiple environments. + +**Steps:** +1. **Define Variables and Outputs:** + - Create a `variables.tf` file to define configurable parameters (e.g., region, instance type). + - Create an `outputs.tf` file to output key information (e.g., public IP address of the EC2 instance). +2. **Implement Workspaces:** + - Use `terraform workspace new` to create separate workspaces for different environments (e.g., dev, staging, prod). +3. **Document in `solution.md`:** + - Include your `variables.tf`, `outputs.tf`, and a summary of your workspace setup. + - Explain how these features enable dynamic and multi-environment deployments. + +**Interview Questions:** +- How do variables and outputs enhance the reusability of Terraform configurations? +- What is the purpose of workspaces in Terraform, and how would you use them in a production scenario? + +--- + +## Task 4: Create and Use Terraform Modules + +**Scenario:** +Enhance reusability by creating a Terraform module for commonly used resources, and integrate it into your main configuration. + +**Steps:** +1. **Create a Module:** + - In a separate directory (e.g., `modules/ec2_instance`), create a module with `main.tf`, `variables.tf`, and `outputs.tf` for provisioning an EC2 instance. +2. **Reference the Module:** + - Update your main configuration to call the module using a `module` block. +3. **Document in `solution.md`:** + - Provide the module code and the main configuration. + - Explain how modules promote consistency and reduce code duplication. + +**Interview Questions:** +- What are the advantages of using modules in Terraform? +- How would you structure a module for reusable infrastructure components? 
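+
+To make the structure concrete, here is one possible sketch of an `ec2_instance` module and its call site (the AMI value and resource names are placeholders, not prescribed by the task):
+
+```hcl
+# modules/ec2_instance/variables.tf
+variable "ami_id" {
+  type = string
+}
+
+variable "instance_type" {
+  type    = string
+  default = "t2.micro"
+}
+
+# modules/ec2_instance/main.tf
+resource "aws_instance" "this" {
+  ami           = var.ami_id
+  instance_type = var.instance_type
+}
+
+# modules/ec2_instance/outputs.tf
+output "public_ip" {
+  value = aws_instance.this.public_ip
+}
+
+# main.tf (root) — calling the module
+module "ec2_instance" {
+  source        = "./modules/ec2_instance"
+  ami_id        = "ami-xxxxxxxx"   # placeholder; supply a real AMI for your region
+  instance_type = "t3.micro"
+}
+```
+
+Because the module encapsulates the resource, every environment that calls it gets an identically shaped instance, and a fix made once in the module propagates to all callers.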
+ +--- + +## Task 5: Resource Dependencies and Lifecycle Management + +**Scenario:** +Ensure correct resource creation order and safe updates by managing dependencies and customizing resource lifecycles. + +**Steps:** +1. **Define Resource Dependencies:** + - Use the `depends_on` meta-argument in your configuration to specify dependencies explicitly. +2. **Configure Resource Lifecycles:** + - Add lifecycle blocks (e.g., `create_before_destroy`) in your resource definitions to manage updates safely. +3. **Document in `solution.md`:** + - Include examples of resource dependencies and lifecycle configurations in your code. + - Explain how these settings prevent downtime during updates. + +**Interview Questions:** +- How does Terraform handle resource dependencies? +- Can you explain the purpose of the `create_before_destroy` lifecycle argument? + +--- + +## Task 6: Infrastructure Drift Detection and Change Management + +**Scenario:** +In production, changes might occur outside of Terraform. Use Terraform commands to detect infrastructure drift and manage changes. + +**Steps:** +1. **Detect Drift:** + - Run `terraform plan` to identify differences between your configuration and the actual infrastructure. +2. **Reconcile Changes:** + - Describe your approach to updating the state or reapplying configurations when drift is detected. +3. **Document in `solution.md`:** + - Include examples of drift detection and your strategy for reconciling differences. + - Reflect on the importance of change management in infrastructure as code. + +**Interview Questions:** +- What is infrastructure drift, and why is it a concern in production environments? +- How would you resolve discrepancies between your Terraform configuration and actual infrastructure? 
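+
+The drift check in step 1 can be made explicit with `terraform plan -detailed-exitcode`, which encodes the result in the exit status (a sketch of the workflow, assuming an initialized project; `drift.tfplan` and the resource address are illustrative):
+
+```bash
+# Exit status: 0 = no changes, 1 = error, 2 = drift or pending changes
+terraform plan -detailed-exitcode -out=drift.tfplan
+
+# If the exit status was 2, reconcile in one of two ways:
+terraform apply drift.tfplan                # enforce the declared configuration, or
+terraform state show aws_instance.example   # inspect the resource in state, then update the .tf files
+```
+
+Which direction you reconcile in is a change-management decision: reapplying treats the code as the source of truth, while updating the configuration adopts the out-of-band change.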
+
+---
+
+## Task 7: (Optional) Dynamic Pipeline Parameterization for Terraform
+
+**Scenario:**
+Enhance your Terraform configurations by using dynamic input parameters and conditional logic to deploy resources differently based on environment-specific values.
+
+**Steps:**
+1. **Enhance Variables with Conditionals:**
+   - Update your `variables.tf` to include default values and conditional expressions for environment-specific configurations.
+2. **Apply Conditional Logic:**
+   - Use conditional expressions in your resource definitions to adjust attributes based on variable values.
+3. **Document in `solution.md`:**
+   - Explain how dynamic parameterization improves flexibility.
+   - Include sample outputs demonstrating different configurations.
+
+**Interview Questions:**
+- How do conditional expressions in Terraform improve configuration flexibility?
+- Provide an example scenario where dynamic parameters are critical in a deployment pipeline.
+
+---
+
+
+### **Bonus Task: Multi-Environment Setup with Terraform & Ansible**
+
+**Scenario:**
+Set up **AWS infrastructure** for multiple environments (dev, staging, prod) using **Terraform** for provisioning and **Ansible** for configuration. This includes installing both tools, creating dynamic inventories, and automating Nginx configuration across environments.
+
+1. **Install Tools:**
+   - Install **Terraform** and **Ansible** on your local machine.
+
+2. **Provision AWS Infrastructure with Terraform:**
+   - Create Terraform files to spin up EC2 instances (or similar resources) in dev, staging, and prod.
+   - Apply configurations (e.g., `terraform apply -var-file="dev.tfvars"`) for each environment.
+
+3. **Configure Hosts with Ansible:**
+   - Generate **dynamic inventories** (or separate inventory files) based on Terraform outputs.
+   - Write a playbook to install and configure **Nginx** across all environments.
+   - Run `ansible-playbook -i <inventory_file> nginx_setup.yml` to automate the setup.
+
+4. 
**Automate & Document:**
+   - Ensure infrastructure changes are version-controlled.
+   - Place all steps, commands, and observations in `solution.md`.
+
+**Interview Questions:**
+- **Terraform & Ansible Integration:** How do you share Terraform outputs (host details) with Ansible inventories?
+- **Multi-Environment Management:** What strategies ensure consistency while keeping dev, staging, and prod isolated?
+- **Nginx Configuration:** How do you handle environment-specific differences for Nginx setups?
+
+---
+
+## How to Submit
+
+1. **Push Your Final Work to GitHub:**
+   - Fork the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop) and ensure all Terraform files (configuration files, modules, variable files, `solution.md`, etc.) are committed and pushed to your fork.
+
+2. **Create a Pull Request (PR):**
+   - Open a PR from your branch (e.g., `terraform-challenge`) to the main repository.
+   - **Title:**
+     ```
+     Week 8 Challenge - Terraform Infrastructure as Code Challenge
+     ```
+   - **PR Description:**
+     - Summarize your approach, list key commands/configurations, and include screenshots or logs as evidence.
+
+3. **Submit Your Documentation:**
+   - **Important:** Place your `solution.md` file in the Week 8 (Terraform) task folder of the 90DaysOfDevOps repository.
+
+4. **Share Your Experience on LinkedIn:**
+   - Write a post summarizing your Terraform challenge experience.
+   - Include key takeaways, challenges faced, and insights (e.g., state management, module usage, drift detection, multi-environment setups).
+   - Use the hashtags: **#90DaysOfDevOps #Terraform #DevOps #InterviewPrep**
+   - Optionally, provide links to your fork or blog posts detailing your journey.
+ +--- + +## TrainWithShubham Resources for Terraform + +- **[Terraform Short Notes](https://www.trainwithshubham.com/products/66d5c45f7345de4e9c1d8b05?dgps_u=l&dgps_s=ucpd&dgps_t=cp_u&dgps_u_st=u&dgps_uid=66c972da3795a9659545d71a)** +- **[Terraform One-Shot Video](https://youtu.be/S9mohJI_R34?si=QdRm-JrdKs8ZswXZ)** +- **[Multi-Environment Setup Blog](https://amitabhdevops.hashnode.dev/devops-project-multi-environment-infrastructure-with-terraform-and-ansible)** + +--- + +## Additional Resources + +- **[Terraform Official Documentation](https://www.terraform.io/docs/)** +- **[Terraform Providers](https://www.terraform.io/docs/providers/index.html)** +- **[Terraform Modules](https://www.terraform.io/docs/modules/index.html)** +- **[Terraform State Management](https://www.terraform.io/docs/state/index.html)** +- **[Terraform Workspaces](https://www.terraform.io/docs/language/state/workspaces.html)** + +--- + +Complete these tasks, answer the interview questions in your documentation, and use your work as a reference to prepare for real-world DevOps challenges and technical interviews. From 77602b100d45b27d433cd13ec3eda975d7201a82 Mon Sep 17 00:00:00 2001 From: Amitabh-DevOps Date: Fri, 28 Mar 2025 09:08:23 +0530 Subject: [PATCH 14/33] Added task for week-9 : Ansible --- 2025/ansible/README.md | 192 +++++++++++++++++++++++++++++++++++++++++ 1 file changed, 192 insertions(+) diff --git a/2025/ansible/README.md b/2025/ansible/README.md index 8b13789179..5721fffd01 100644 --- a/2025/ansible/README.md +++ b/2025/ansible/README.md @@ -1 +1,193 @@ +# Week 9: Ansible Automation Challenge +This set of tasks is part of the 90DaysOfDevOps challenge and focuses on solving real-world automation problems using Ansible. By completing these tasks on your designated Ansible project repository, you'll work on scenarios that mirror production environments and industry practices. 
The tasks cover installation, dynamic inventory management, robust playbook development, role organization, secure secret management, and orchestration of multi-tier applications. Your work will help you build practical skills and prepare for technical interviews. + +**Important:** +1. Fork or create your designated Ansible project repository (or use your own) and implement all tasks on your fork. +2. Document all steps, commands, screenshots, and observations in a file named `solution.md` within your fork. +3. Submit your `solution.md` file in the Week 9 (Ansible) task folder of the 90DaysOfDevOps repository. + +--- + +## Task 1: Install Ansible and Configure a Dynamic Inventory + +**Real-World Scenario:** +In production, inventories change frequently. Set up Ansible with a dynamic inventory (using a script or AWS EC2 plugin) to automatically fetch and update target hosts. + +**Steps:** +1. **Install Ansible:** + - Follow the official installation guide to install Ansible on your local machine. +2. **Configure a Dynamic Inventory:** + - Set up a dynamic inventory using an inventory script or the AWS EC2 dynamic inventory plugin. +3. **Test Connectivity:** + - Run: + ```bash + ansible all -m ping -i dynamic_inventory.py + ``` + to ensure all servers are reachable. +4. **Document in `solution.md`:** + - Include your dynamic inventory configuration and test outputs. + - Explain how dynamic inventories adapt to a production environment. + +**Interview Questions:** +- How do dynamic inventories improve the management of production hosts? +- What challenges do dynamic inventory sources present and how can you mitigate them? + +--- + +## Task 2: Develop a Robust Playbook to Install and Configure Nginx + +**Real-World Scenario:** +Web servers like Nginx must be reliably deployed and configured in production. 
Create a playbook that installs Nginx, configures it using advanced Jinja2 templating (with loops, conditionals, and filters), and verifies that Nginx is running correctly. Incorporate asynchronous task execution with error handling for long-running operations. + +**Steps:** +1. **Create a Comprehensive Playbook:** + - Write a playbook (e.g., `nginx_setup.yml`) that: + - Installs Nginx. + - Deploys a templated Nginx configuration using a Jinja2 template (`nginx.conf.j2`) that includes loops and conditionals. + - Implements asynchronous execution (`async` and `poll`) with error handling. +2. **Test the Playbook:** + - Run the playbook against your dynamic inventory. +3. **Document in `solution.md`:** + - Include your playbook and Jinja2 template. + - Describe your strategies for asynchronous execution and error handling. + +**Interview Questions:** +- How do Jinja2 templates with loops and conditionals improve production configuration management? +- What are the challenges of managing long-running tasks with async in Ansible, and how do you handle errors? + +--- + +## Task 3: Organize Complex Playbooks Using Roles and Advanced Variables + +**Real-World Scenario:** +For large-scale production environments, organizing your playbooks into roles enhances maintainability and collaboration. Refactor your playbooks into roles (e.g., `nginx`, `app`, `db`) and use advanced variable files (with hierarchies and conditionals) to manage different configurations. + +**Steps:** +1. **Create Roles:** + - Develop roles for different components (e.g., `nginx`, `app`, `db`) with the standard directory structure (`tasks/`, `handlers/`, `templates/`, `vars/`). +2. **Utilize Advanced Variables:** + - Create hierarchical variable files with default values and override files for various scenarios. +3. **Refactor and Execute:** + - Update your composite playbook to include the roles. +4. **Document in `solution.md`:** + - Provide the role directory structure and sample variable files. 
+ - Explain how this organization improves maintainability and flexibility. + +**Interview Questions:** +- How do roles improve scalability and collaboration in large-scale Ansible projects? +- What strategies do you use for variable precedence and hierarchy in complex environments? + +--- + +## Task 4: Secure Production Data with Advanced Ansible Vault Techniques + +**Real-World Scenario:** +In production, managing secrets securely is critical. Use Ansible Vault to encrypt sensitive data and explore advanced techniques like splitting secrets into multiple files and decrypting them at runtime. + +**Steps:** +1. **Create Encrypted Files:** + - Use `ansible-vault create` to encrypt multiple secret files. +2. **Integrate Vault in Your Playbooks:** + - Modify your playbooks to load encrypted variables from multiple files. +3. **Test Decryption:** + - Run your playbooks with the vault password to ensure proper decryption. +4. **Document in `solution.md`:** + - Outline your vault strategy and best practices (without exposing secrets). + - Explain the importance of secure secret management. + +**Interview Questions:** +- How does Ansible Vault secure sensitive data in production? +- What advanced techniques can you use for managing secrets at scale? + +--- + +## Task 5: Advanced Orchestration for Multi-Tier Deployments + +**Real-World Scenario:** +Deploy a multi-tier application (e.g., frontend, backend, and database) using Ansible roles to manage each tier. Use orchestration features (such as `serial`, `order`, and async execution) to ensure a smooth deployment process. + +**Steps:** +1. **Develop a Composite Playbook:** + - Write a playbook that calls multiple roles (e.g., `nginx` for frontend, `app` for backend, `db` for the database). +2. **Manage Execution Order and Async Tasks:** + - Use features like `serial` or `order` and implement asynchronous tasks with error handling where necessary. +3. 
**Document in `solution.md`:** + - Include your composite playbook and explain your orchestration strategy. + - Describe any asynchronous task handling and error management. + +**Interview Questions:** +- How do you orchestrate multi-tier deployments with Ansible? +- What are the challenges and solutions for asynchronous task execution in a multi-tier environment? + +--- + +## Bonus Task: Multi-Environment Setup with Terraform & Ansible + +**Real-World Scenario:** +Integrate Terraform and Ansible to provision and configure AWS infrastructure across multiple environments (dev, staging, prod). Use Terraform to provision resources using environment-specific variable files and use Ansible to configure them (e.g., install and configure Nginx). + +**Steps:** +1. **Provision with Terraform:** + - Create environment-specific variable files (e.g., `dev.tfvars`, `staging.tfvars`, `prod.tfvars`). + - Apply your Terraform configuration for each environment: + ```bash + terraform apply -var-file="dev.tfvars" + ``` +2. **Configure with Ansible:** + - Create separate inventory files or use a dynamic inventory based on Terraform outputs. + - Write a playbook (e.g., `nginx_setup.yml`) to install and configure Nginx. + - Execute the playbook for each environment. +3. **Document in `solution.md`:** + - Provide your environment-specific variable files, inventory files, and playbook. + - Summarize how Terraform outputs integrate with Ansible to manage multi-environment deployments. + +**Interview Questions:** +- How do you integrate Terraform outputs into Ansible inventories in a production workflow? +- What challenges might you face when managing multi-environment configurations, and how do you overcome them? + +--- + +## How to Submit + +1. **Push Your Final Work to GitHub:** + - Fork or use your designated Ansible project repository and ensure all files (playbooks, roles, inventory files, `solution.md`, etc.) are committed and pushed to your fork. + +2. 
**Create a Pull Request (PR):** + - Open a PR from your branch (e.g., `ansible-challenge`) to the main repository. + - **Title:** + ``` + Week 9 Challenge - Ansible Automation Challenge + ``` + - **PR Description:** + - Summarize your approach, list key commands/configurations, and include screenshots or logs as evidence. + +3. **Submit Your Documentation:** + - **Important:** Place your `solution.md` file in the Week 9 (Ansible) task folder of the 90DaysOfDevOps repository. + +4. **Share Your Experience on LinkedIn:** + - Write a post summarizing your Ansible challenge experience. + - Include key takeaways, challenges faced, and insights (e.g., dynamic inventory, multi-tier orchestration, advanced Vault usage, and Terraform-Ansible integration). + - Use the hashtags: **#90DaysOfDevOps #Ansible #DevOps #InterviewPrep** + - Optionally, provide links to your fork or blog posts detailing your journey. + +--- + +## TrainWithShubham Resources for Ansible + +- **[Ansible Short Notes](https://www.trainwithshubham.com/products/Ansible-Short-Notes-64ad5f72b308530823e2c036)** +- **[Ansible One-Shot Video](https://youtu.be/4GwafiGsTUM?si=gqlIsNrfAv495WGj)** +- **[Multi-env setup blog](https://trainwithshubham.blog/devops-project-multi-environment-infrastructure-with-terraform-and-ansible/)** + +--- + +## Additional Resources + +- **[Ansible Official Documentation](https://docs.ansible.com/)** +- **[Ansible Modules Documentation](https://docs.ansible.com/ansible/latest/modules/modules_by_category.html)** +- **[Ansible Galaxy](https://galaxy.ansible.com/)** +- **[Ansible Best Practices](https://docs.ansible.com/ansible/latest/user_guide/playbooks_best_practices.html)** + +--- + +Complete these tasks, answer the interview questions in your documentation, and use your work as a reference to prepare for real-world DevOps challenges and technical interviews. 
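As a quick reference for the multi-tier orchestration in Task 5, a composite playbook with tier ordering, rolling batches, and an asynchronous task might look like the sketch below. Role names, host groups, the helper script, and the timing values are illustrative assumptions, not a required layout:

```yaml
# site.yml — illustrative sketch; role names, groups, and values are assumptions
- name: Configure the database tier first
  hosts: db
  serial: 1                  # one node at a time to protect the data tier
  roles:
    - db

- name: Configure the backend once the database tier is ready
  hosts: app
  roles:
    - app
  tasks:
    - name: Warm application caches in the background
      command: /usr/local/bin/warm-cache   # hypothetical helper script
      async: 300                           # allow up to 5 minutes of runtime
      poll: 0                              # fire and forget; check later with async_status

- name: Configure the frontend last, half the fleet at a time
  hosts: web
  serial: "50%"
  roles:
    - nginx
```

Here `serial` controls the batch size per play, while `async` with `poll: 0` lets a long-running task proceed without blocking the play; its result can be collected later with the `async_status` module.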
From a0cb9f0623fea5e266384fc0391dfe8776dd330b Mon Sep 17 00:00:00 2001 From: Amitabh-DevOps Date: Fri, 28 Mar 2025 09:40:39 +0530 Subject: [PATCH 15/33] Added task for week-10 : Observability --- 2025/observability/README.md | 185 +++++++++++++++++++++++++++++++++++ 1 file changed, 185 insertions(+) create mode 100644 2025/observability/README.md diff --git a/2025/observability/README.md b/2025/observability/README.md new file mode 100644 index 0000000000..3363f243d3 --- /dev/null +++ b/2025/observability/README.md @@ -0,0 +1,185 @@ +# Week 10: Observability Challenge with Prometheus and Grafana on KIND/EKS + +This challenge is part of the 90DaysOfDevOps program and focuses on solving advanced, production-grade observability scenarios using Prometheus and Grafana. You will deploy, configure, and fine-tune monitoring and alerting systems on a KIND cluster, and as a bonus, monitor and log an AWS EKS cluster. This exercise is designed to push your skills with advanced configurations, custom queries, dynamic dashboards, and robust alerting mechanisms, while preparing you for technical interviews. + +**Important:** +1. Fork the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop) and implement all tasks on your fork. +2. Document all steps, commands, screenshots, and observations in a file named `solution.md` within your fork. +3. Submit your `solution.md` file in the Week 10 (Observability) task folder of the 90DaysOfDevOps repository. + +--- + +## Task 1: Setup a KIND Cluster for Observability + +**Real-World Scenario:** +Simulate a production-like Kubernetes environment locally by creating a KIND cluster to serve as the foundation for your monitoring setup. + +**Steps:** +1. **Install KIND:** + - Follow the official KIND installation guide. +2. **Create a KIND Cluster:** + - Run: + ```bash + kind create cluster --name observability-cluster + ``` +3. **Verify the Cluster:** + - Run `kubectl get nodes` and capture the output. +4. 
**Document in `solution.md`:** + - Include installation steps, the commands used, and output from `kubectl get nodes`. + +**Interview Questions:** +- What are the benefits and limitations of using KIND for production-like testing? +- How can you simulate production scenarios using a local KIND cluster? + +--- + +## Task 2: Deploy Prometheus on KIND with Advanced Configurations + +**Real-World Scenario:** +Deploy Prometheus on your KIND cluster with a custom configuration that includes advanced scrape settings and relabeling rules to ensure high-quality metric collection. + +**Steps:** +1. **Create a Custom Prometheus Configuration:** + - Write a `prometheus.yml` with custom scrape configurations targeting cluster components (e.g., kube-state-metrics, Node Exporter) and advanced relabeling rules to clean up metric labels. +2. **Deploy Prometheus:** + - Deploy Prometheus using a Kubernetes Deployment or via a Helm chart. +3. **Verify and Tune:** + - Access the Prometheus UI to verify that metrics are being scraped as expected. + - Adjust relabeling rules and scrape intervals to optimize performance. +4. **Document in `solution.md`:** + - Include your `prometheus.yml` and screenshots of the Prometheus UI showing active targets and effective relabeling. + +**Interview Questions:** +- How do advanced relabeling rules refine metric collection in Prometheus? +- What performance issues might you encounter when scraping targets on a KIND cluster, and how would you address them? + +--- + +## Task 3: Deploy Grafana and Build Production-Grade Dashboards + +**Real-World Scenario:** +Deploy Grafana on your KIND cluster and configure it to use Prometheus as a data source. Then, create dashboards that reflect real production metrics, including custom queries and complex visualizations. + +**Steps:** +1. **Deploy Grafana:** + - Create a Kubernetes Deployment and Service for Grafana. +2. **Configure the Data Source:** + - In the Grafana UI, add Prometheus as a data source. +3. 
**Design Production Dashboards:** + - Create dashboards with panels that display key metrics (e.g., CPU, memory, disk I/O, network latency) using advanced PromQL queries. + - Customize panel visualizations (e.g., graphs, tables, heatmaps) to present data effectively. +4. **Document in `solution.md`:** + - Include configuration details, screenshots of dashboards, and an explanation of the queries and visualization choices. + +**Interview Questions:** +- What factors are critical when designing dashboards for production monitoring? +- How do you optimize PromQL queries for performance and clarity in Grafana? + +--- + +## Task 4: Configure Alerting and Notification Rules + +**Real-World Scenario:** +Establish robust alerting to detect critical issues (e.g., resource exhaustion, node failures) and notify the operations team immediately. + +**Steps:** +1. **Define Alerting Rules:** + - Add alerting rules in `prometheus.yml` or configure Prometheus Alertmanager for specific conditions. +2. **Configure Notification Channels:** + - Set up Grafana (or Alertmanager) to send notifications via email, Slack, or another channel. +3. **Test Alerts:** + - Simulate alert conditions (e.g., by temporarily reducing resources) to verify that notifications are sent. +4. **Document in `solution.md`:** + - Include your alerting configuration, screenshots of triggered alerts, and a brief rationale for chosen thresholds. + +**Interview Questions:** +- How do you design effective alerting rules to minimize false positives in production? +- What challenges do you face in configuring notifications for a dynamic environment? + +--- + +## Task 5: Deploy Node Exporter for Enhanced System Metrics + +**Real-World Scenario:** +Enhance system monitoring by deploying Node Exporter on your KIND cluster to collect detailed metrics such as CPU, memory, disk, and network usage, which are critical for troubleshooting production issues. + +**Steps:** +1. 
**Deploy Node Exporter:**
+ - Create a DaemonSet to deploy Node Exporter across all nodes in your KIND cluster (a DaemonSet, unlike a Deployment, schedules exactly one pod per node, which is what host-level metrics require).
+2. **Verify Metrics Collection:**
+ - Ensure Node Exporter endpoints are correctly scraped by Prometheus.
+3. **Document in `solution.md`:**
+ - Include your Node Exporter YAML configuration and screenshots showing metrics collected in Prometheus.
+ - Explain the importance of system-level metrics in production monitoring.
+
+**Interview Questions:**
+- What additional system metrics does Node Exporter provide that are crucial for production?
+- How would you integrate Node Exporter metrics into your existing Prometheus setup?
+
+---
+
+## Bonus Task: Monitor and Log an AWS EKS Cluster
+
+**Real-World Scenario:**
+For an added challenge, provision or use an existing AWS EKS cluster and set up Prometheus and Grafana to monitor and log its performance. This task simulates the observability of a production cloud environment.
+
+**Steps:**
+1. **Provision an EKS Cluster:**
+ - Use Terraform to deploy an EKS cluster (or leverage an existing one) and document key configuration settings.
+2. **Deploy Prometheus and Grafana on EKS:**
+ - Configure Prometheus with appropriate scrape targets for the EKS cluster.
+ - Deploy Grafana and integrate it with Prometheus.
+3. **Integrate Logging (Optional):**
+ - Configure a logging solution (e.g., Fluentd or CloudWatch) to capture EKS logs.
+4. **Document in `solution.md`:**
+ - Summarize your EKS provisioning steps, Prometheus and Grafana configurations, and any logging integration.
+ - Explain how monitoring and logging improve observability in a cloud environment.
+
+**Interview Questions:**
+- What are the key challenges of monitoring an EKS cluster versus a local KIND cluster?
+- How would you integrate logging with monitoring tools to ensure comprehensive observability?
+
+---
+
+## How to Submit
+
+1. 
**Push Your Final Work to GitHub:** + - Fork the [online_shop repository](https://github.com/Amitabh-DevOps/online_shop) and ensure all files (Prometheus and Grafana configurations, Node Exporter YAML, Terraform files for the bonus task, `solution.md`, etc.) are committed and pushed to your fork. + +2. **Create a Pull Request (PR):** + - Open a PR from your branch (e.g., `observability-challenge`) to the main repository. + - **Title:** + ``` + Week 10 Challenge - Observability Challenge (Prometheus & Grafana on KIND/EKS) + ``` + - **PR Description:** + - Summarize your approach, list key commands/configurations, and include screenshots or logs as evidence. + +3. **Submit Your Documentation:** + - **Important:** Place your `solution.md` file in the Week 10 (Observability) task folder of the 90DaysOfDevOps repository. + +4. **Share Your Experience on LinkedIn:** + - Write a post summarizing your Observability challenge experience. + - Include key takeaways, challenges faced, and insights (e.g., KIND/EKS setup, advanced configurations, dashboard creation, alerting strategies, and Node Exporter integration). + - Use the hashtags: **#90DaysOfDevOps #Prometheus #Grafana #KIND #EKS #Observability #DevOps #InterviewPrep** + - Optionally, provide links to your repository or blog posts detailing your journey. 
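As a reference for the kind of scrape and relabeling configuration Task 2 asks for, a minimal `prometheus.yml` fragment might look like the sketch below. The job name, the assumed service name, and the label choices are illustrative, not prescribed:

```yaml
# prometheus.yml fragment — illustrative values only
global:
  scrape_interval: 30s            # modest interval; KIND clusters have limited resources

scrape_configs:
  - job_name: node-exporter
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      # Keep only endpoints belonging to a service assumed to be named "node-exporter"
      - source_labels: [__meta_kubernetes_service_name]
        regex: node-exporter
        action: keep
      # Copy the node name into a short, stable label for dashboards
      - source_labels: [__meta_kubernetes_pod_node_name]
        target_label: node
```

The `keep` action drops every discovered target that does not match, which is usually the single most effective way to cut scrape noise on a small cluster.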
+ +--- + +## TrainWithShubham Resources for Observability + +- **[Prometheus & Grafana One-Shot Video](https://youtu.be/DXZUunEeHqM?si=go1m-THyng7Ipyu6)** + +--- + +## Additional Resources + +- **[Prometheus Official Documentation](https://prometheus.io/docs/)** +- **[Grafana Official Documentation](https://grafana.com/docs/)** +- **[Alertmanager Documentation](https://prometheus.io/docs/alerting/latest/alertmanager/)** +- **[Kubernetes Monitoring with Prometheus](https://kubernetes.io/docs/tasks/debug-application-cluster/resource-metrics-pipeline/)** +- **[Grafana Dashboards](https://grafana.com/grafana/dashboards/)** + +--- + +Complete these tasks, answer the interview questions in your documentation, and use your work as a reference to prepare for real-world DevOps challenges and technical interviews. From 15be8bb52b1d08213393687b83ecf3205447f590 Mon Sep 17 00:00:00 2001 From: LondheShubham153 Date: Fri, 23 Jan 2026 17:02:14 +0530 Subject: [PATCH 16/33] chore: added 2026 Day 1 tasks --- .gitignore | 41 ++++++++++++ 2026/day-01/README.md | 99 +++++++++++++++++++++++++++++ CONTRIBUTING.md | 131 ++++----------------------------------- LICENSE | 21 +++++++ README.md | 73 ++++------------------ scripts/generate_days.py | 16 +++++ scripts/generate_days.sh | 7 +++ 7 files changed, 207 insertions(+), 181 deletions(-) create mode 100644 .gitignore create mode 100644 2026/day-01/README.md create mode 100644 LICENSE create mode 100644 scripts/generate_days.py create mode 100755 scripts/generate_days.sh diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000000..7596df90c5 --- /dev/null +++ b/.gitignore @@ -0,0 +1,41 @@ +# OS +.DS_Store +Thumbs.db + +# Logs +*.log +*.pid + +# Environment +.env +.env.* + +# Python +__pycache__/ +*.pyc +.venv/ + +# Node +node_modules/ + +# Build +build/ +dist/ + +# Terraform +.terraform/ +*.tfstate +*.tfstate.* +crash.log + +# Kubernetes +.kube/ + +# IDE +.vscode/ +.idea/ + +# Caches +.cache/ +.pytest_cache/ +coverage/ diff 
--git a/2026/day-01/README.md b/2026/day-01/README.md
new file mode 100644
index 0000000000..118d4e7f4e
--- /dev/null
+++ b/2026/day-01/README.md
@@ -0,0 +1,99 @@
+# Day 01 – Introduction to DevOps and Cloud
+
+## Task
+Today’s goal is to **set the foundation for your DevOps journey**.
+
+You will create a **90-day personal DevOps learning plan** that clearly defines:
+- What is your understanding of DevOps and Cloud Engineering?
+- Why are you starting to learn DevOps & Cloud?
+- Where do you want to be by Day 90?
+- How will you stay consistent every single day?
+
+This is not a generic plan.
+This is your **career execution blueprint** for the next 90 days.
+
+---
+
+## Expected Output
+By the end of today, you should have:
+
+- A markdown file named:
+ `learning-plan.md`
+
+or
+
+- A hand written plan for the next 90 Days (Recommended)
+
+
+The file/note should clearly reflect your intent, discipline, and seriousness toward becoming a DevOps engineer.
+
+---
+
+## Guidelines
+Follow these rules while creating your plan:
+
+- Mention your **current level**
+ (student / fresher / working professional / non-IT background, etc.)
+- Define **3 clear goals** for the next 90 days
+ (example: deploy a production-grade application on Kubernetes)
+- Define **3 core DevOps skills** you want to build
+ (example: Linux troubleshooting, CI/CD pipelines, Kubernetes debugging)
+- Allocate a **weekly time budget**
+ (example: 2–2.5 hours per day on weekdays, 4–6 hours on weekends)
+- Keep the document **under 1 page**
+- Be honest and realistic; consistency matters more than perfection
+
+---
+
+## Resources
+You may refer to:
+
+- TrainWithShubham [course curriculum](https://english.trainwithshubham.com/JOSH_BATCH_10_Syllabus_v1.pdf)
+- TrainWithShubham DevOps [roadmap](https://docs.google.com/spreadsheets/d/1eE-NhZQFr545LkP4QNhTgXcZTtkMFeEPNyVXAflXia0/edit?gid=2073716385#gid=2073716385)
+- Your own past experience and career aspirations
+
+Avoid over-researching today. 
The focus is **clarity**, not depth. + +--- + +## Why This Matters for DevOps +DevOps engineers succeed not just because of tools, but because of: + +- Discipline +- Ownership +- Long-term thinking +- Ability to execute consistently + +In real jobs, no one tells you exactly what to do every day. +This task trains you to **take ownership of your own growth**, just like a real DevOps engineer. + +A clear plan: +- Reduces confusion +- Prevents burnout +- Keeps you focused during tough days + +--- + +## Submission +1. Fork this `90DaysOfDevOps` repository +2. Navigate to the `2026/day-01/` folder +3. Add your `learning-plan.md` file +4. Commit and push your changes to your fork + +--- + +## Learn in Public +Share your Day 01 progress on LinkedIn: + +- Post 2–3 lines on why you’re starting **#90DaysOfDevOps** +- Share one goal from your learning plan +- Optional: screenshot of your markdown file or a professional picture + +Use hashtags: +#90DaysOfDevOps +#DevOpsKaJosh +#TrainWithShubham + + +Happy Learning +**TrainWithShubham** \ No newline at end of file diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 3376813da5..764c626f00 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -1,125 +1,18 @@ # Contributing Guidelines -Thank you for your interest in contributing to our project. Whether it's a bug report, new feature, correction, or additional -documentation, we greatly value feedback and contributions from our community. +Thank you for contributing to #90DaysOfDevOps. -Please read through this document before submitting any issues or pull requests to ensure we have all the necessary -information to effectively respond to your bug report or contribution. +## Quick Start +- Fork the repository and create a branch. +- Complete the task in the correct `2026/day-XX` folder. +- Commit with a clear message (example: `day-14: git fundamentals notes`). 
-## Reporting Bugs, Features, and Enhancements -We welcome you to use the GitHub issue tracker to report bugs or suggest features and enhancements. +## Pull Request Checklist +- Only update the day(s) you worked on. +- Ensure filenames match the Expected Output exactly. +- Keep content concise and practical. +- No emojis in README files. -When filing an issue, please check existing open, or recently closed, issues to make sure someone else hasn't already -reported the issue. - -Please try to include as much information as you can. Details like these are incredibly useful: - -* A reproducible test case or series of steps. -* Any modifications you've made relevant to the bug. -* Anything unusual about your environment or deployment. - -## Contributing via Pull Requests - -Contributions via pull requests are appreciated. Before sending us a pull request, please ensure that: - -1. You [open a discussion](https://github.com/MichaelCade/90DaysOfDevOps/discussions) to discuss any significant work with the maintainer(s). -2. You open an issue and link your pull request to the issue for context. -3. You are working against the latest source on the `main` branch. -4. You check existing open, and recently merged, pull requests to make sure someone else hasn't already addressed the problem. - -To send us a pull request, please: - -1. Fork the repository. -2. Modify the source; please focus on the **specific** change you are contributing. -3. Ensure local tests pass. -4. Updated the documentation, if required. -4. Commit to your fork [using a clear commit messages](http://chris.beams.io/posts/git-commit/). We ask you to please use [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0/). -5. Send us a pull request, answering any default questions in the pull request. -6. Pay attention to any automated failures reported in the pull request, and stay involved in the conversation. 
- -GitHub provides additional document on [forking a repository](https://help.github.com/articles/fork-a-repo/) and -[creating a pull request](https://help.github.com/articles/creating-a-pull-request/). - -### Contributor Flow - -This is a rough outline of what a contributor's workflow looks like: - -- Create a topic branch from where you want to base your work. -- Make commits of logical units. -- Make sure your commit messages are [in the proper format](http://chris.beams.io/posts/git-commit/). -- Push your changes to a topic branch in your fork of the repository. -- Submit a pull request. - -Example: - -``` shell -git remote add upstream https://github.com/vmware-samples/packer-examples-for-vsphere.git -git checkout -b my-new-feature main -git commit -s -a -git push origin my-new-feature -``` - -### Staying In Sync With Upstream - -When your branch gets out of sync with the 90DaysOfDevOps/main branch, use the following to update: - -``` shell -git checkout my-new-feature -git fetch -a -git pull --rebase upstream main -git push --force-with-lease origin my-new-feature -``` - -### Updating Pull Requests - -If your pull request fails to pass or needs changes based on code review, you'll most likely want to squash these changes into -existing commits. - -If your pull request contains a single commit or your changes are related to the most recent commit, you can simply amend the commit. - -``` shell -git add . -git commit --amend -git push --force-with-lease origin my-new-feature -``` - -If you need to squash changes into an earlier commit, you can use: - -``` shell -git add . -git commit --fixup -git rebase -i --autosquash main -git push --force-with-lease origin my-new-feature -``` - -Be sure to add a comment to the pull request indicating your new changes are ready to review, as GitHub does not generate a notification when you `git push`. 
- -### Formatting Commit Messages - -We follow the conventions on [How to Write a Git Commit Message](http://chris.beams.io/posts/git-commit/). - -Be sure to include any related GitHub issue references in the commit message. - -See [GFM syntax](https://guides.github.com/features/mastering-markdown/#GitHub-flavored-markdown) for referencing issues and commits. - -## Reporting Bugs and Creating Issues - -When opening a new issue, try to roughly follow the commit message format conventions above. - -## Finding Contributions to Work On - -Looking at the existing issues is a great way to find something to contribute on. If you have an idea you'd like to discuss, [open a discussion](https://github.com/MichaelCade/90DaysOfDevOps/discussions). - -## License - -Shield: [![CC BY-NC-SA 4.0][cc-by-nc-sa-shield]][cc-by-nc-sa] - -This work is licensed under a -[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa]. - -[![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa] - -[cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/ -[cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png -[cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg +## Code of Conduct +Be respectful, supportive, and helpful to others in the community. 
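The quick-start flow above can be rehearsed end to end against a throwaway local repository before touching your real fork. Names and paths below are placeholders:

```bash
# Local dry run of the contribution flow using a throwaway repository
# (your real work targets your fork of 90DaysOfDevOps; the remote URL is omitted here)
workdir=$(mktemp -d)
git init -q "$workdir/demo"
cd "$workdir/demo"
git config user.name "Demo User"
git config user.email demo@example.com
git checkout -q -b day-14-git-notes                 # one branch per day keeps PRs small
echo "Git fundamentals notes" > notes.md
git add notes.md
git commit -q -m "day-14: git fundamentals notes"   # clear, day-scoped commit message
git log --oneline
```

The commit message follows the `day-XX: summary` pattern suggested in the quick start, which makes your history easy to review in a PR.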
diff --git a/LICENSE b/LICENSE new file mode 100644 index 0000000000..58006f3ebf --- /dev/null +++ b/LICENSE @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2026 TrainWithShubham + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. diff --git a/README.md b/README.md index c075658e1a..b6583aad0b 100644 --- a/README.md +++ b/README.md @@ -1,66 +1,15 @@ -# #90DaysOfDevOps Challenge +# #90DaysOfDevOps 2026 -## Learn, Upskill, Grow with the Community +Welcome to TrainWithShubham's #90DaysOfDevOps challenge. This repository contains the 2026 day-by-day tasks, aligned to the live class schedule and designed for daily practice. -Join our DevOps community challenge and embark on a 90-day journey to become a better DevOps practitioner. This repository serves as an open invitation to all DevOps enthusiasts who are looking to enhance their skills and knowledge. 
By participating in this challenge, you will have the opportunity to learn from others in the community, collaborate with like-minded individuals, and ultimately strengthen your DevOps abilities. +## How to Participate +- Fork the repository. +- Complete one day at a time in `2026/day-XX`. +- Commit your deliverable in the day folder. +- Open a pull request with your updates. -Let's come together to grow and achieve new heights in DevOps! +## Schedule Note +Day 01 starts on 2026-01-24 (Asia/Kolkata). Live class days contain the main challenge for the topic. Non-live days focus on practice and reinforcement. -📖 **Discover More in Our Detailed Table of Contents!** Explore the richness of our content and find what you're looking for efficiently. Check out our [TOC here](./TOC.md). - -## Steps: - -- Fork[https://github.com/LondheShubham153/90DaysOfDevOps/fork] the Repo. -- Learn Everyday and add your learnings in the day wise folders. -- Check out what others are Learning and help/learn from them. -- Showcase your learnings on LinkedIn - -## These are our community Links - - -   - - - -   - - - -   - - - -   - - - -  - - - - -  - -## Events - -### YouTube Live Announcement: - - -   - -### YouTube Playlist for DevOps: - - -   - -### DevOps Course: - - -  - -## Thanks to all contributors ❤ - - - - +## Start Here +Go to `2026/day-01/README.md` and follow the task. diff --git a/scripts/generate_days.py b/scripts/generate_days.py new file mode 100644 index 0000000000..a186169445 --- /dev/null +++ b/scripts/generate_days.py @@ -0,0 +1,16 @@ +from pathlib import Path + +root = Path(__file__).resolve().parents[1] +base = root / "2026" +base.mkdir(parents=True, exist_ok=True) + +total_days = 90 + +# Generate folders and ensure README.md exists without overwriting. 
+for i in range(total_days): + day_num = i + 1 + day_dir = base / f"day-{day_num:02d}" + day_dir.mkdir(parents=True, exist_ok=True) + readme = day_dir / "README.md" + if not readme.exists(): + readme.write_text("") diff --git a/scripts/generate_days.sh b/scripts/generate_days.sh new file mode 100755 index 0000000000..c83c5f9b8d --- /dev/null +++ b/scripts/generate_days.sh @@ -0,0 +1,7 @@ +#!/usr/bin/env bash +set -euo pipefail + +# Regenerate 2026 day folders if needed. +# Usage: ./scripts/generate_days.sh + +python3 scripts/generate_days.py From 974b75e150ea7b2195544facff0aa010f3326f56 Mon Sep 17 00:00:00 2001 From: LondheShubham153 Date: Fri, 23 Jan 2026 17:07:09 +0530 Subject: [PATCH 17/33] chore: update readme --- README.md | 137 +++++++++++++++++++++++++++++++++++++++++++++++++----- 1 file changed, 126 insertions(+), 11 deletions(-) diff --git a/README.md b/README.md index b6583aad0b..aff67ddb9c 100644 --- a/README.md +++ b/README.md @@ -1,15 +1,130 @@ -# #90DaysOfDevOps 2026 +# 🚀 90DaysOfDevOps +### Learn • Build • Practice • Become Job-Ready -Welcome to TrainWithShubham's #90DaysOfDevOps challenge. This repository contains the 2026 day-by-day tasks, aligned to the live class schedule and designed for daily practice. +Welcome to **90DaysOfDevOps**, a structured and hands-on DevOps challenge by **TrainWithShubham**. -## How to Participate -- Fork the repository. -- Complete one day at a time in `2026/day-XX`. -- Commit your deliverable in the day folder. -- Open a pull request with your updates. +This repository is designed to help you **build real DevOps skills step by step in 90 days** — not by watching endless videos, but by **doing daily tasks**, building projects, and thinking like a **production-ready DevOps engineer**. -## Schedule Note -Day 01 starts on 2026-01-24 (Asia/Kolkata). Live class days contain the main challenge for the topic. Non-live days focus on practice and reinforcement. +This is not a theory-heavy course. 
+This is a **discipline + execution challenge**. -## Start Here -Go to `2026/day-01/README.md` and follow the task. +--- + +## 🎯 What is #90DaysOfDevOps? + +**#90DaysOfDevOps** is a **day-wise DevOps learning challenge** where: + +- Every day has **one clear task** +- Every task has a **real-world DevOps outcome** +- Every learner builds a **public GitHub proof of work** +- Every concept is reinforced through **hands-on practice** +- Learning is aligned with **live classes and recordings** + +By the end of 90 days, you will have: +- Strong DevOps fundamentals +- Multiple mini-projects +- One end-to-end DevOps capstone project +- A GitHub profile that clearly shows consistency +- Confidence to handle DevOps interviews and production systems + +--- + +## 🧠 Who Is This For? + +This challenge is ideal for: + +- Students and freshers entering DevOps or Cloud +- Working professionals switching to DevOps / SRE / Cloud roles +- Developers who want to understand infrastructure and CI/CD +- Anyone who believes **consistency beats talent** + +No prior DevOps experience is required. +**Commitment is mandatory.** + +--- + +## 🗂 Repository Structure + +``` +90DaysOfDevOps/ +│ +├── README.md +├── CONTRIBUTING.md +├── LICENSE +├── .gitignore +│ +├── scripts/ +│ └── helper-scripts.sh +│ +├── day-01/ +│ └── README.md +├── day-02/ +│ └── README.md +├── ... +├── day-90/ +│ └── README.md +``` + +--- + +## 📅 How the Challenge Works + +- **One day = one task** +- Tasks are aligned with **live classes** +- Live class days focus on **core concepts** +- Weekdays focus on **practice and reinforcement** +- Daily commits are encouraged + +Even **30–60 minutes per day** is enough if done honestly. 
+ +--- + +## 🛠 What You Will Learn + +- Linux fundamentals and troubleshooting +- Shell scripting and automation +- Networking basics for DevOps +- Git and GitHub workflows +- Docker and containerization +- AWS core and advanced services +- CI/CD using Jenkins, GitHub Actions, GitLab +- DevSecOps fundamentals +- Kubernetes, Helm, ArgoCD +- Terraform and Ansible +- Observability with Grafana, Prometheus, OpenTelemetry +- End-to-end DevOps project + +--- + +## 📦 How to Participate + +1. Fork this repository +2. Clone your fork +3. Navigate to the current `day-XX` folder +4. Complete the task +5. Commit and push your work + +--- + +## 🌍 Learn in Public + +Share your progress on LinkedIn: + +``` +#90DaysOfDevOps +#DevOpsKaJosh +#TrainWithShubham +``` + +--- + +## ❤️ Final Note + +DevOps is not about tools. +It is about **ownership, reliability, and consistency**. + +One day at a time. +One commit at a time. + +Happy Learning +**TrainWithShubham** From 5e8a5d38932c224e91281a64ad34b865f78f1002 Mon Sep 17 00:00:00 2001 From: LondheShubham153 Date: Sun, 25 Jan 2026 08:17:48 +0530 Subject: [PATCH 18/33] chore: update day 2 task --- 2026/day-02/README.md | 85 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 85 insertions(+) create mode 100644 2026/day-02/README.md diff --git a/2026/day-02/README.md b/2026/day-02/README.md new file mode 100644 index 0000000000..d8910d0b82 --- /dev/null +++ b/2026/day-02/README.md @@ -0,0 +1,85 @@ +# Day 02 – Linux Architecture, Processes, and systemd + +## Task +Today’s goal is to **understand how Linux works under the hood**. + +You will create a short note that explains: +- The core components of Linux (kernel, user space, init/systemd) +- How processes are created and managed +- What systemd does and why it matters + +This is the foundation for all troubleshooting you will do as a DevOps engineer. 
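A small, safe way to see process states for yourself is to read them straight from `/proc`. This is a sketch using only standard tools; the exact counts will vary by machine:

```bash
# Tally Linux process states straight from /proc — the same data that `ps` formats.
# Field 2 of the "State:" line is the letter (R, S, D, Z, ...), field 3 the description.
grep -h '^State:' /proc/[0-9]*/status 2>/dev/null \
  | awk '{print $2, $3}' \
  | sort | uniq -c | sort -rn

# If the machine runs systemd, relate those processes to running services:
if command -v systemctl >/dev/null; then
  systemctl list-units --type=service --state=running | head -n 5
else
  echo "systemd not available in this environment"
fi
```

Comparing this output with `ps aux` is a quick way to internalise what a sleeping, running, or zombie process actually is.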
+ +--- + +## Expected Output +By the end of today, you should have: + +- A markdown file named: + `linux-architecture-notes.md` + +or + +- A hand written set of notes (Recommended) + +Your notes should be clear enough that someone new to Linux can follow them. + +--- + +## Guidelines +Follow these rules while creating your notes: + +- Explain **process states** (running, sleeping, zombie, etc.) +- List **5 commands** you would use daily +- Keep it **short and practical** (under 1 page) +- Use bullet points and short headings + +--- + +## Resources +You may refer to: + +- Linux `man` pages (`ps`, `top`, `systemctl`) +- Official systemd docs +- Your class notes + +Avoid copying/pasting AI Generated content. +Focus on understanding. + +--- + +## Why This Matters for DevOps +Linux is the base OS for almost every production system. + +If you know how processes and systemd work, you can: +- Debug crashed services faster +- Fix CPU/memory issues +- Understand logs and service restarts confidently + +This knowledge saves hours during incidents. + +--- + +## Submission +1. Fork this `90DaysOfDevOps` repository +2. Navigate to the `2026/day-02/` folder +3. Add your `linux-architecture-notes.md` file +4. 
Commit and push your changes to your fork + +--- + +## Learn in Public +Share your Day 02 progress on LinkedIn: + +- Post 2–3 lines on what you learned about Linux internals +- Share one systemd command you found useful +- Optional: screenshot of your notes + +Use hashtags: +#90DaysOfDevOps +#DevOpsKaJosh +#TrainWithShubham + + +Happy Learning +**TrainWithShubham** \ No newline at end of file From ddc9858c17b2a16585b6ec1067065042e045c991 Mon Sep 17 00:00:00 2001 From: LondheShubham153 Date: Mon, 26 Jan 2026 11:13:59 +0530 Subject: [PATCH 19/33] Added Day 3 tasks --- 2026/day-03/README.md | 82 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 82 insertions(+) create mode 100644 2026/day-03/README.md diff --git a/2026/day-03/README.md b/2026/day-03/README.md new file mode 100644 index 0000000000..62d10ab962 --- /dev/null +++ b/2026/day-03/README.md @@ -0,0 +1,82 @@ +# Day 03 – Linux Commands, Logs, and Networking Commands + +## Task +Today’s goal is to **build your Linux command confidence**. + +You will create a cheat sheet of commands focused on: +- Process management +- Log analysis +- Networking troubleshooting + +This is the command toolkit you will reuse for years. + +--- + +## Expected Output +By the end of today, you should have: + +- A markdown file named: + `linux-commands-cheatsheet.md` + +or + +- A hand written cheat sheet (Recommended) + +Your cheat sheet should be easy to scan during real troubleshooting. + +--- + +## Guidelines +Follow these rules while creating your cheat sheet: + +- Include **at least 20 commands** with one‑line usage notes +- Add **3 networking commands** (`ping`, `ip addr`, `dig`, `curl`, etc.) +- Group commands by category +- Keep it concise and readable + +--- + +## Resources +You may refer to: + +- Linux `man` pages +- Your class notes +- Reliable Linux command references + +Don’t copy long lists. Focus on commands you understand. 
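To get started, the cheat-sheet file itself can be seeded from the shell with a grouped, scannable layout. A sketch (the entries are illustrative examples only, not a complete list; log paths vary by distro):

```shell
# Seed linux-commands-cheatsheet.md with grouped sections to fill in
cat > linux-commands-cheatsheet.md <<'EOF'
# Linux Commands Cheat Sheet

## Processes
- `ps aux --sort=-%cpu | head` — top CPU consumers
- `pgrep -l sshd` — find a process by name

## Logs
- `tail -f /var/log/syslog` — follow the system log (path varies by distro)
- `journalctl -u cron -n 50` — last 50 log lines of a systemd unit

## Networking
- `ip addr show` — interfaces and addresses
- `curl -I https://example.com` — headers-only reachability check
EOF

# Quick sanity check: how many entries so far
grep -c '^- ' linux-commands-cheatsheet.md
```

From there, keep appending only commands you have actually run and understood.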
+ +--- + +## Why This Matters for DevOps +Real production issues are solved at the command line. + +The faster you can inspect logs and network issues, the faster you can: +- Restore service +- Reduce downtime +- Gain trust as an operator + +--- + +## Submission +1. Fork this `90DaysOfDevOps` repository +2. Navigate to the `2026/day-03/` folder +3. Add your `linux-commands-cheatsheet.md` file +4. Commit and push your changes to your fork + +--- + +## Learn in Public +Share your Day 03 progress on LinkedIn: + +- Post 2–3 lines on your favorite Linux commands +- Share one log command and one networking command +- Optional: screenshot of your cheat sheet + +Use hashtags: +#90DaysOfDevOps +#DevOpsKaJosh +#TrainWithShubham + + +Happy Learning +**TrainWithShubham** \ No newline at end of file From eff5af96d631ba24738f2603dc93902c5188e37e Mon Sep 17 00:00:00 2001 From: LondheShubham153 Date: Mon, 26 Jan 2026 11:15:44 +0530 Subject: [PATCH 20/33] Added Day 3 tasks --- 2026/day-03/README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/2026/day-03/README.md b/2026/day-03/README.md index 62d10ab962..5ccf2ccc98 100644 --- a/2026/day-03/README.md +++ b/2026/day-03/README.md @@ -1,11 +1,11 @@ -# Day 03 – Linux Commands, Logs, and Networking Commands +# Day 03 – Linux Commands Practice ## Task Today’s goal is to **build your Linux command confidence**. You will create a cheat sheet of commands focused on: - Process management -- Log analysis +- File system - Networking troubleshooting This is the command toolkit you will reuse for years. 
From c1020275e4caa479272d200e4ea2bf7513e5dd62 Mon Sep 17 00:00:00 2001 From: LondheShubham153 Date: Tue, 27 Jan 2026 10:06:46 +0530 Subject: [PATCH 21/33] Added Day 4 tasks --- 2026/day-04/README.md | 86 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 86 insertions(+) create mode 100644 2026/day-04/README.md diff --git a/2026/day-04/README.md b/2026/day-04/README.md new file mode 100644 index 0000000000..b884376fb6 --- /dev/null +++ b/2026/day-04/README.md @@ -0,0 +1,86 @@ +# Day 04 – Linux Practice: Processes and Services + +## Task +Today’s goal is to **practice Linux fundamentals with real commands**. + +You will create a short practice note by actually running basic commands and capturing what you see: +- Check running processes +- Inspect one systemd service +- Capture a small troubleshooting flow + +This is hands-on. Keep it simple and focused on fundamentals. + +--- + +## Expected Output +By the end of today, you should have: + +- A markdown file named: + `linux-practice.md` + +or + +- A hand written practice log (Recommended) + +Your note should show what you actually ran on your system. + +--- + +## Guidelines +Follow these rules while creating your practice note: + +- Run and record output for **at least 6 commands** +- Include **2 process commands** (`ps`, `top`, `pgrep`, etc.) +- Include **2 service commands** (`systemctl status`, `systemctl list-units`, etc.) +- Include **2 log commands** (`journalctl -u `, `tail -n 50`, etc.) 
+- Pick **one service on your system** (example: `ssh`, `cron`, `docker`) and inspect it +- Add **one mini troubleshooting scenario** and the exact steps you would take +- Keep it **simple and actionable** + +Suggested structure for `linux-practice.md`: +- Process checks +- Service checks +- Log checks +- Mini troubleshooting steps + +--- + +## Resources +You may refer to: + +- Your notes from Day 02 and Day 03 +- Linux `man` pages +- Your class notes + +--- + +## Why This Matters for DevOps +Hands‑on practice builds speed and confidence. + +When issues happen in production, you won’t have time to search for basic commands. +This day helps you build muscle memory with Linux fundamentals. + +--- + +## Submission +1. Fork this `90DaysOfDevOps` repository +2. Navigate to the `2026/day-04/` folder +3. Add your `linux-practice.md` file +4. Commit and push your changes to your fork + +--- + +## Learn in Public +Share your Day 04 progress on LinkedIn: + +- Post 2–3 lines on the Linux commands you practiced +- Share one service you inspected and what you learned +- Optional: screenshot of your practice note + +Use hashtags: +#90DaysOfDevOps +#DevOpsKaJosh +#TrainWithShubham + +Happy Learning +**TrainWithShubham** From bb55d28640aae42283aa2c9f01a01a514b002390 Mon Sep 17 00:00:00 2001 From: LondheShubham153 Date: Tue, 27 Jan 2026 10:13:51 +0530 Subject: [PATCH 22/33] Added Day 4 tasks --- 2026/day-04/README.md | 1 - 1 file changed, 1 deletion(-) diff --git a/2026/day-04/README.md b/2026/day-04/README.md index b884376fb6..12fd91c7cf 100644 --- a/2026/day-04/README.md +++ b/2026/day-04/README.md @@ -34,7 +34,6 @@ Follow these rules while creating your practice note: - Include **2 service commands** (`systemctl status`, `systemctl list-units`, etc.) - Include **2 log commands** (`journalctl -u `, `tail -n 50`, etc.) 
- Pick **one service on your system** (example: `ssh`, `cron`, `docker`) and inspect it -- Add **one mini troubleshooting scenario** and the exact steps you would take - Keep it **simple and actionable** Suggested structure for `linux-practice.md`: From e09949e9483f68b48808278424508c90b358648c Mon Sep 17 00:00:00 2001 From: LondheShubham153 Date: Wed, 28 Jan 2026 13:34:30 +0530 Subject: [PATCH 23/33] added day 5 tasks --- 2026/day-05/README.md | 102 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 102 insertions(+) create mode 100644 2026/day-05/README.md diff --git a/2026/day-05/README.md b/2026/day-05/README.md new file mode 100644 index 0000000000..03b0b49f02 --- /dev/null +++ b/2026/day-05/README.md @@ -0,0 +1,102 @@ +# Day 05 – Linux Troubleshooting Drill: CPU, Memory, and Logs + +## Task +Today’s goal is to **run a focused troubleshooting drill**. + +You will pick a running process/service on your system and: +- Capture a quick health snapshot (CPU, memory, disk, network) +- Trace logs for that service +- Write a **mini runbook** describing what you did and what you’d do next if things were worse + +This turns yesterday’s practice into a repeatable troubleshooting routine. + +### What’s a runbook? +A **runbook** is a short, repeatable checklist you follow during an incident: the exact commands you run, what you observed, and the next actions if the issue persists. Keep it concise so you can reuse it under pressure. + +--- + +## Expected Output +By the end of today, you should have: + +- A markdown file named: + `linux-troubleshooting-runbook.md` + +or + +- A hand written runbook (Recommended) + +Your runbook should include both the commands you ran and brief interpretations. 
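One low-effort way to begin: generate the runbook skeleton first, then paste real command output under each heading as you work through the drill. A sketch (heading names follow the suggested structure for this task):

```shell
# Create the runbook skeleton; fill each section in during the drill
cat > linux-troubleshooting-runbook.md <<'EOF'
# Linux Troubleshooting Runbook
## Target service / process
## Snapshot: CPU & Memory
## Snapshot: Disk & IO
## Snapshot: Network
## Logs reviewed
## Quick findings
## If this worsens (next steps)
EOF

# Sanity check: seven section headings ready to fill
grep -c '^## ' linux-troubleshooting-runbook.md
```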
+ +--- + +## Guidelines +Follow these rules while creating your runbook: + +- Run and record output for **at least 8 commands** (save snippets in your runbook) + - **Environment basics (2):** `uname -a`, `lsb_release -a` (or `cat /etc/os-release`) + - **Filesystem sanity (2):** create a throwaway folder and file, e.g., `mkdir /tmp/runbook-demo`, `cp /etc/hosts /tmp/runbook-demo/hosts-copy && ls -l /tmp/runbook-demo` + - **CPU / Memory (2):** `top`/`htop`/`ps -o pid,pcpu,pmem,comm -p `, `free -h`, `vm_stat` (mac) + - **Disk / IO (2):** `df -h`, `du -sh /var/log`, `iostat`/`vmstat`/`dstat` + - **Network (2):** `ss -tulpn`/`netstat -tulpn`, `curl -I `/`ping` + - **Logs (2):** `journalctl -u -n 50`, `tail -n 50 /var/log/.log` +- Choose **one target service/process** (e.g., `ssh`, `cron`, `docker`, your web app) and stick to it for the drill. +- For each command, add a 1–2 line note on what you observed (e.g., “CPU spikes to 80% when restarting”, “No recent errors in last 50 lines”). +- End with a **“If this worsens”** section listing 3 next steps you would take (ex: restart strategy, increase log verbosity, collect `strace`). +- Keep it concise and actionable (aim for ~1 page). + +Suggested structure for `linux-troubleshooting-runbook.md`: +- Target service / process +- Snapshot: CPU & Memory +- Snapshot: Disk & IO +- Snapshot: Network +- Logs reviewed +- Quick findings +- If this worsens (next steps) + +--- + +## Resources +You may refer to: + +- Notes from Day 02–04 +- Linux `man` pages (`top`, `ps`, `df`, `journalctl`, `ss/netstat`) +- Your class notes + +Avoid generic copy/paste. Use outputs from **your** machine. + +--- + +## Why This Matters for DevOps +Incidents rarely come with perfect clues. A fast, repeatable checklist saves minutes when services misbehave. 
+ +This drill builds: +- Habit of capturing evidence before acting +- Confidence reading resource signals (CPU, memory, disk, network) +- Log-first mindset before restarts or escalations + +These habits reduce downtime and prevent guesswork in production. + +--- + +## Submission +1. Fork this `90DaysOfDevOps` repository +2. Navigate to the `2026/day-05/` folder +3. Add your `linux-troubleshooting-runbook.md` file +4. Commit and push your changes to your fork + +--- + +## Learn in Public +Share your Day 05 progress on LinkedIn: + +- Post 2–3 lines on the checks you ran and one insight +- Share the service you inspected and one “next step” from your runbook +- Optional: screenshot of your runbook + +Use hashtags: +#90DaysOfDevOps +#DevOpsKaJosh +#TrainWithShubham + +Happy Learning +**TrainWithShubham** From e09ceef9b02adaa479ad634028cfc4fd1b1fb782 Mon Sep 17 00:00:00 2001 From: LondheShubham153 Date: Thu, 29 Jan 2026 10:56:07 +0530 Subject: [PATCH 24/33] added day 6 tasks --- 2026/day-06/README.md | 93 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 93 insertions(+) create mode 100644 2026/day-06/README.md diff --git a/2026/day-06/README.md b/2026/day-06/README.md new file mode 100644 index 0000000000..12d4ff03fb --- /dev/null +++ b/2026/day-06/README.md @@ -0,0 +1,93 @@ +# Day 06 – Linux Fundamentals: Read and Write Text Files + +## Task +This is a **continuation of Day 05**, but much simpler. + +Today’s goal is to **practice basic file read/write** using only fundamental commands. + +You will create a small text file and practice: +- Creating a file +- Writing text to a file +- Appending new lines +- Reading the file back + +Keep it basic and repeatable. + +--- + +## Expected Output +By the end of today, you should have: + +- the new created files +- A markdown file named: + `file-io-practice.md` + +or + +- A hand written practice note (Recommended) + +Your note should include the commands you ran and what they did. 
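The key subtlety to capture in your note: `>` replaces a file's contents, while `>>` preserves them. A quick demonstration using a throwaway `demo.txt`:

```shell
echo "first"  > demo.txt   # '>' truncates the file before writing
echo "second" > demo.txt   # so "first" is now gone
cat demo.txt               # prints: second

echo "third" >> demo.txt   # '>>' appends; existing content survives
wc -l < demo.txt           # prints: 2
```

Accidentally using `>` where `>>` was intended is a classic way to lose data, so it is worth seeing once on a file you don't care about.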
+ +--- + +## Guidelines +Follow these rules while creating your practice note: + +- Create a file named `notes.txt` +- Write 3 lines into the file using **redirection** (`>` and `>>`) +- Use **`cat`** to read the full file +- Use **`head`** and **`tail`** to read parts of the file +- Use **`tee`** once to write and display at the same time +- Keep it short (8–12 lines total in the file) + +Suggested command flow: +1. `touch notes.txt` +2. `echo "Line 1" > notes.txt` +3. `echo "Line 2" >> notes.txt` +4. `echo "Line 3" | tee -a notes.txt` +5. `cat notes.txt` +6. `head -n 2 notes.txt` +7. `tail -n 2 notes.txt` + +--- + +## Resources +Use these docs to understand the commands: + +- `touch` (create an empty file) +- `cat` (read full file) +- `head` and `tail` (read parts of a file) +- `tee` (write and display at the same time) + +--- + +## Why This Matters for DevOps +Reading and writing files is a daily task in DevOps. + +Logs, configs, and scripts are all text files. +If you can handle files quickly, you can debug and automate faster. + +--- + +## Submission +1. Fork this `90DaysOfDevOps` repository +2. Navigate to the `2026/day-06/` folder +3. Add your `file-io-practice.md` file +4. 
Commit and push your changes to your fork + +--- + +## Learn in Public +Share your Day 06 progress on LinkedIn: + +- Post 2–3 lines on what you learned about file read/write +- Share one command you will use often +- Optional: screenshot of your notes + +Use hashtags: +#90DaysOfDevOps +#DevOpsKaJosh +#TrainWithShubham + +Happy Learning +**TrainWithShubham** From 07bee0c99c7f0b2d9d3f6c7d4c9b2660790a1b9b Mon Sep 17 00:00:00 2001 From: LondheShubham153 Date: Thu, 29 Jan 2026 20:18:43 +0530 Subject: [PATCH 25/33] added day 7 task --- 2026/day-07/README.md | 1 + 1 file changed, 1 insertion(+) create mode 100644 2026/day-07/README.md diff --git a/2026/day-07/README.md b/2026/day-07/README.md new file mode 100644 index 0000000000..f648669db2 --- /dev/null +++ b/2026/day-07/README.md @@ -0,0 +1 @@ +# Day 07 – Linux Package Installer: Apt, Yum, Rpm \ No newline at end of file From 0be4b77527f0e09b032f72e350ae5a6f4e3ca7fd Mon Sep 17 00:00:00 2001 From: LondheShubham153 Date: Fri, 30 Jan 2026 12:13:18 +0530 Subject: [PATCH 26/33] added day 7 task --- 2026/day-07/README.md | 257 +++++++++++++++++++++++++++++++++++++++++- 1 file changed, 256 insertions(+), 1 deletion(-) diff --git a/2026/day-07/README.md b/2026/day-07/README.md index f648669db2..613b241332 100644 --- a/2026/day-07/README.md +++ b/2026/day-07/README.md @@ -1 +1,256 @@ -# Day 07 – Linux Package Installer: Apt, Yum, Rpm \ No newline at end of file +# Day 07 – Linux File System Hierarchy & Scenario-Based Practice + +## Task +Today's goal is to **understand where things live in Linux** and **practice troubleshooting like a DevOps engineer**. + +You will create notes covering: +- Linux File System Hierarchy (the most important directories) +- Practice solving real-world scenarios step by step + +This consolidates your Linux fundamentals and prepares you for real-world troubleshooting. 
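Before writing the notes, a one-line summary per core directory gives a quick lay of the land. A read-only sketch (the directory list matches today's core set; output varies by system, and that variation is itself worth noting):

```shell
# One line per core directory: path, permissions, owner:group
for d in / /home /root /etc /var/log /tmp; do
  ls -ld "$d" 2>/dev/null | awk -v dir="$d" '{print dir, $1, $3":"$4}'
done
```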
+ +--- + +## Expected Output +By the end of today, you should have: + +- A markdown file named: + `day-07-linux-fs-and-scenarios.md` + +or + +- A hand written set of notes (Recommended) + +Your notes should have two sections: File System Hierarchy and Scenario Practice. + +--- + +## Guidelines + +### Part 1: Linux File System Hierarchy (30 minutes) + +Document the purpose of these **essential** directories: + +**Core Directories (Must Know):** +- `/` (root) - The starting point of everything +- `/home` - User home directories +- `/root` - Root user's home directory +- `/etc` - Configuration files +- `/var/log` - Log files (very important for DevOps!) +- `/tmp` - Temporary files + +**Additional Directories (Good to Know):** +- `/bin` - Essential command binaries +- `/usr/bin` - User command binaries +- `/opt` - Optional/third-party applications + +For each directory: +- Write 1-2 lines explaining what it contains +- Run `ls -l ` and note 1-2 files/folders you see +- Write one sentence: "I would use this when..." + +**Hands-on task:** +```bash +# Find the largest log file in /var/log +du -sh /var/log/* 2>/dev/null | sort -h | tail -5 + +# Look at a config file in /etc +cat /etc/hostname + +# Check your home directory +ls -la ~ +``` + +--- + +### Part 2: Scenario-Based Practice (40 minutes) + +**Important:** Focus on understanding the **troubleshooting flow**, not memorizing commands. Use the hints! + +--- + +#### SOLVED EXAMPLE: Understanding How to Approach Scenarios + +**Example Scenario: Check if a service is running** +``` +Question: How do you check if the 'nginx' service is running? 
+``` + +**My Solution (Step by step):** + +**Step 1:** Check service status +```bash +systemctl status nginx +``` +**Why this command?** It shows if the service is active, failed, or stopped + +**Step 2:** If service is not found, list all services +```bash +systemctl list-units --type=service +``` +**Why this command?** To see what services exist on the system + +**Step 3:** Check if service is enabled on boot +```bash +systemctl is-enabled nginx +``` +**Why this command?** To know if it will start automatically after reboot + +**What I learned:** Always check status first, then investigate based on what you see. + +--- + +Now try these scenarios yourself: + +--- + +**Scenario 1: Service Not Starting** +``` +A web application service called 'myapp' failed to start after a server reboot. +What commands would you run to diagnose the issue? +Write at least 4 commands in order. +``` + +**Hint:** +- First check: Is the service running or failed? +- Then check: What do the logs say? +- Finally check: Is it enabled to start on boot? + +**Commands to explore:** `systemctl status myapp`, `systemctl is-enabled myapp`, `journalctl -u myapp -n 50` + +**Resource:** Review Day 04 (Process and Services practice) + +**Template for your answer:** +``` +Step 1: [command] +Why: [one line explanation] + +Step 2: [command] +Why: [one line explanation] + +... +``` + +--- + +**Scenario 2: High CPU Usage** +``` +Your manager reports that the application server is slow. +You SSH into the server. What commands would you run to identify +which process is using high CPU? 
+
```

**Hint:**
- Use a command that shows **live** CPU usage
- Look for processes sorted by CPU percentage
- Note the PID (Process ID) of the top process

**Commands to explore:** `top` (press 'q' to quit), `htop`, `ps aux --sort=-%cpu | head -10`

**Resource:** Review Day 05 (Troubleshooting Drill - CPU & Memory section)

---

**Scenario 3: Finding Service Logs**
```
A developer asks: "Where are the logs for the 'docker' service?"
The service is managed by systemd.
What commands would you use?
```

**Hint:**
- systemd services → logs are in journald
- Command pattern: `journalctl -u <service-name>`
- Use the `-n` flag to limit the number of lines
- Use the `-f` flag to follow logs in real time (like `tail -f`)

**Commands to explore:**
```bash
# Check service status first
systemctl status docker

# View last 50 lines of logs
journalctl -u docker -n 50

# Follow logs in real-time
journalctl -u docker -f
```

**Resource:** Review Day 04 (Process and Services - Log checks section)

---

**Scenario 4: File Permissions Issue**
```
A script at /home/user/backup.sh is not executing.
When you run it: ./backup.sh
You get: "Permission denied"

What commands would you use to fix this?
+``` + +**Hint:** +- First: Check what permissions the file has +- Understand: Files need 'x' (execute) permission to run +- Fix: Add execute permission with chmod + +**Step-by-step solution structure:** +``` +Step 1: Check current permissions +Command: ls -l /home/user/backup.sh +Look for: -rw-r--r-- (notice no 'x' = not executable) + +Step 2: Add execute permission +Command: chmod +x /home/user/backup.sh + +Step 3: Verify it worked +Command: ls -l /home/user/backup.sh +Look for: -rwxr-xr-x (notice 'x' = executable) + +Step 4: Try running it +Command: ./backup.sh +``` + +**Resource:** Review Day 02 (File Permissions and Users Management) + +--- + +## Why This Matters for DevOps +Understanding the file system is critical for: +- Knowing where to find logs, configs, and binaries +- Troubleshooting deployment issues +- Writing automation scripts that work across systems + +Scenario-based practice prepares you for: +- Real production incidents +- DevOps interviews +- On-call troubleshooting under pressure + +These are questions you **will** face in interviews and during real incidents. + +--- + +## Submission +1. Fork this `90DaysOfDevOps` repository +2. Navigate to the `2026/day-07/` folder +3. Add your `day-07-linux-fs-and-scenarios.md` file +4. 
Commit and push your changes to your fork + +--- + +## Learn in Public +Share your Day 07 progress on LinkedIn: + +- Post 2–3 lines on what you learned about Linux file system +- Share one scenario you found challenging and how you solved it +- Optional: screenshot of your notes + +Use hashtags: +``` +#90DaysOfDevOps +#DevOpsKaJosh +#TrainWithShubham +``` + +Happy Learning +**TrainWithShubham** From 5ff71f4809a2a0c5b858ccc04c7a0bf8c58b859a Mon Sep 17 00:00:00 2001 From: LondheShubham153 Date: Sat, 31 Jan 2026 16:25:00 +0530 Subject: [PATCH 27/33] added day 8 task --- 2026/day-08/README.md | 146 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 146 insertions(+) create mode 100644 2026/day-08/README.md diff --git a/2026/day-08/README.md b/2026/day-08/README.md new file mode 100644 index 0000000000..443decbe16 --- /dev/null +++ b/2026/day-08/README.md @@ -0,0 +1,146 @@ +# Day 08 – Cloud Server Setup: Docker, Nginx & Web Deployment + +## Task +Today's goal is to **deploy a real web server on the cloud** and learn practical server management. + +You will: +- Launch a cloud instance (AWS EC2 or Utho) +- Connect via SSH +- Install Nginx +- Configure security groups for web access (port 80 by default for nginx) +- Extract and save logs to a file +- Verify your webpage is accessible from the internet + +This is real DevOps work - exactly what you'll do in production. + +--- + +## Expected Output +By the end of today, you should have: + +1. A markdown file named: `day-08-cloud-deployment.md` +2. Screenshots showing: + - SSH connection to your server + - Nginx welcome page accessible from browser + - Log file contents +3. 
The log file: `nginx-logs.txt`

---

## Prerequisites
- AWS account (Free Tier) OR Utho account
- Basic understanding of Linux commands (Days 1-7)
- SSH client (Terminal on Mac/Linux, PuTTY on Windows)

---

## Guidelines

### Part 1: Launch Cloud Instance & SSH Access (15 minutes)

**Step 1: Create a Cloud Instance**

**Step 2: Connect via SSH**

---

### Part 2: Install Docker & Nginx (20 minutes)

**Step 1: Update System**

**Step 2: Install Docker**

**Step 3: Install Nginx**

**Verify Nginx is running:**

---

### Part 3: Security Group Configuration (10 minutes)

**Test Web Access:**
Open browser and visit: `http://<instance-public-ip>`

You should see the **Nginx welcome page**!

📸 **Screenshot this page** - you'll need it for submission

---

### Part 4: Extract Nginx Logs (15 minutes)

**Step 1: View Nginx Logs**

**Step 2: Save Logs to File**

**Step 3: Download Log File to Your Local Machine**
```bash
# On your local machine (new terminal window)
# For AWS:
scp -i your-key.pem ubuntu@<instance-public-ip>:~/nginx-logs.txt .

# For Utho:
scp root@<instance-public-ip>:~/nginx-logs.txt .
```

---

## Documentation Template

Create your `day-08-cloud-deployment.md` with this structure:

```markdown
## Commands Used
[List the key commands you used]

## Challenges Faced
[Describe any issues and how you solved them]

## What I Learned
[3-5 bullet points of key learnings]
```

---

## Why This Matters for DevOps

This exercise teaches you:
- **Cloud infrastructure provisioning** - launching and configuring servers
- **Remote server management** - SSH, security, access control
- **Service deployment** - installing and running applications
- **Log management** - accessing and analyzing logs
- **Security** - configuring firewalls and security groups

These are core skills for any DevOps engineer working in production.

---

## Submission
1. Fork this `90DaysOfDevOps` repository
2. Navigate to the `2026/day-08/` folder
3. Add your `day-08-cloud-deployment.md` file
4.
Add your `nginx-logs.txt` file +5. Add screenshots (name them: `ssh-connection.png`, `nginx-webpage.png`, `docker-nginx.png`) +6. Commit and push your changes to your fork + +--- + +## Learn in Public +Share your Day 08 progress on LinkedIn: + +- Post 2-3 lines on deploying your first cloud server +- Share screenshot of your Nginx webpage +- Mention one challenge you faced and solved +- Optional: Share your instance IP (if comfortable) + +Use hashtags: +``` +#90DaysOfDevOps +#DevOpsKaJosh +#TrainWithShubham +``` + +Happy Learning +**TrainWithShubham** From f5c44f9a1ba6b0d824df07d78023907323cb607e Mon Sep 17 00:00:00 2001 From: LondheShubham153 Date: Sun, 1 Feb 2026 06:17:52 +0530 Subject: [PATCH 28/33] added day 9 task --- 2026/day-09/README.md | 150 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 150 insertions(+) create mode 100644 2026/day-09/README.md diff --git a/2026/day-09/README.md b/2026/day-09/README.md new file mode 100644 index 0000000000..67aea03d24 --- /dev/null +++ b/2026/day-09/README.md @@ -0,0 +1,150 @@ +# Day 09 – Linux User & Group Management Challenge + +## Task +Today's goal is to **practice user and group management** by completing hands-on challenges. + +Figure out how to: +- Create users and set passwords +- Create groups and assign users +- Set up shared directories with group permissions + +Use what you learned from Days 1-7 to find the right commands! 
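Whatever commands you land on, verify each step before moving on. A read-only verification sketch (shown here with `root`, which exists on every Linux system; substitute `tokyo` and the other users once you have created them):

```shell
# Does the user exist? (reads /etc/passwd via NSS)
getent passwd root

# Which groups is the user a member of?
id -nG root

# Does the group exist, and who are its members?
getent group root
```

These checks work without `sudo`, so they are safe to run as often as you like.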
+ +--- + +## Expected Output +- A markdown file: `day-09-user-management.md` +- Screenshots of command outputs +- List of commands used + +--- + +## Challenge Tasks + +### Task 1: Create Users (20 minutes) + +Create three users with home directories and passwords: +- `tokyo` +- `berlin` +- `professor` + +**Verify:** Check `/etc/passwd` and `/home/` directory + +--- + +### Task 2: Create Groups (10 minutes) + +Create two groups: +- `developers` +- `admins` + +**Verify:** Check `/etc/group` + +--- + +### Task 3: Assign to Groups (15 minutes) + +Assign users: +- `tokyo` → `developers` +- `berlin` → `developers` + `admins` (both groups) +- `professor` → `admins` + +**Verify:** Use appropriate command to check group membership + +--- + +### Task 4: Shared Directory (20 minutes) + +1. Create directory: `/opt/dev-project` +2. Set group owner to `developers` +3. Set permissions to `775` (rwxrwxr-x) +4. Test by creating files as `tokyo` and `berlin` + +**Verify:** Check permissions and test file creation + +--- + +### Task 5: Team Workspace (20 minutes) + +1. Create user `nairobi` with home directory +2. Create group `project-team` +3. Add `nairobi` and `tokyo` to `project-team` +4. Create `/opt/team-workspace` directory +5. Set group to `project-team`, permissions to `775` +6. Test by creating file as `nairobi` + +--- + +## Hints + +**Stuck? 
Try these commands:** +- User: `useradd`, `passwd`, `usermod` +- Group: `groupadd`, `groups` +- Permissions: `chgrp`, `chmod` +- Test: `sudo -u username command` + +**Tip:** Use `-m` flag with useradd for home directory, `-aG` for adding to groups + +--- + +## Documentation + +Create `day-09-user-management.md`: + +```markdown +# Day 09 Challenge + +## Users & Groups Created +- Users: tokyo, berlin, professor, nairobi +- Groups: developers, admins, project-team + +## Group Assignments +[List who is in which groups] + +## Directories Created +[List directories with permissions] + +## Commands Used +[Your commands here] + +## What I Learned +[3 key points] +``` + +--- + + +## Troubleshooting + +**Permission denied?** Use `sudo` + +**User can't access directory?** +- Check group: `groups username` +- Check permissions: `ls -ld /path` + +--- + +## Submission +1. Fork this `90DaysOfDevOps` repository +2. Navigate to `2026/day-09/` folder +3. Add your `day-09-user-management.md` with screenshots +4. Commit and push + +--- + +## Learn in Public +Share your Day 09 progress on LinkedIn: + +- Post about completing the user management challenge +- Share one thing you figured out +- Mention real-world DevOps use + +Use hashtags: +``` +#90DaysOfDevOps +#DevOpsKaJosh +#TrainWithShubham +``` + +Happy Learning +**TrainWithShubham** From 129f26ebda3425d4db8378b0bd2c780dc28afc9e Mon Sep 17 00:00:00 2001 From: LondheShubham153 Date: Mon, 2 Feb 2026 11:46:29 +0530 Subject: [PATCH 29/33] Added day 10 tasks --- 2026/day-10/README.md | 117 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 117 insertions(+) create mode 100644 2026/day-10/README.md diff --git a/2026/day-10/README.md b/2026/day-10/README.md new file mode 100644 index 0000000000..f1c66a14c3 --- /dev/null +++ b/2026/day-10/README.md @@ -0,0 +1,117 @@ +# Day 10 – File Permissions & File Operations Challenge + +## Task +Master file permissions and basic file operations in Linux. 
+ +- Create and read files using `touch`, `cat`, `vim` +- Understand and modify permissions using `chmod` + +--- + +## Expected Output +- A markdown file: `day-10-file-permissions.md` +- Screenshots showing permission changes + +--- + +## Challenge Tasks + +### Task 1: Create Files (10 minutes) + +1. Create empty file `devops.txt` using `touch` +2. Create `notes.txt` with some content using `cat` or `echo` +3. Create `script.sh` using `vim` with content: `echo "Hello DevOps"` + +**Verify:** `ls -l` to see permissions + +--- + +### Task 2: Read Files (10 minutes) + +1. Read `notes.txt` using `cat` +2. View `script.sh` in vim read-only mode +3. Display first 5 lines of `/etc/passwd` using `head` +4. Display last 5 lines of `/etc/passwd` using `tail` + +--- + +### Task 3: Understand Permissions (10 minutes) + +Format: `rwxrwxrwx` (owner-group-others) +- `r` = read (4), `w` = write (2), `x` = execute (1) + +Check your files: `ls -l devops.txt notes.txt script.sh` + +Answer: What are current permissions? Who can read/write/execute? + +--- + +### Task 4: Modify Permissions (20 minutes) + +1. Make `script.sh` executable → run it with `./script.sh` +2. Set `devops.txt` to read-only (remove write for all) +3. Set `notes.txt` to `640` (owner: rw, group: r, others: none) +4. Create directory `project/` with permissions `755` + +**Verify:** `ls -l` after each change + +--- + +### Task 5: Test Permissions (10 minutes) + +1. Try writing to a read-only file - what happens? +2. Try executing a file without execute permission +3. 
Document the error messages + +--- + +## Hints + +- Create: `touch`, `cat > file`, `vim file` +- Read: `cat`, `head -n`, `tail -n` +- Permissions: `chmod +x`, `chmod -w`, `chmod 755` + +--- + +## Documentation + +Create `day-10-file-permissions.md`: + +```markdown +# Day 10 Challenge + +## Files Created +[list files] + +## Permission Changes +[before/after for each file] + +## Commands Used +[your commands] + +## What I Learned +[3 key points] +``` + +--- + +## Submission +1. Navigate to `2026/day-10/` folder +2. Add `day-10-file-permissions.md` with screenshots +3. Commit and push + +--- + +## Learn in Public + +Share on LinkedIn about mastering file permissions. + +Use hashtags: +``` +#90DaysOfDevOps +#DevOpsKaJosh +#TrainWithShubham +``` + +Happy Learning +**TrainWithShubham** From 68e1bfea847629513c95552af4c6697c776a284c Mon Sep 17 00:00:00 2001 From: LondheShubham153 Date: Tue, 3 Feb 2026 10:47:00 +0530 Subject: [PATCH 30/33] Added day 11 tasks --- 2026/day-11/README.md | 216 ++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 216 insertions(+) create mode 100644 2026/day-11/README.md diff --git a/2026/day-11/README.md b/2026/day-11/README.md new file mode 100644 index 0000000000..128b04f0f1 --- /dev/null +++ b/2026/day-11/README.md @@ -0,0 +1,216 @@ +# Day 11 – File Ownership Challenge (chown & chgrp) + +## Task +Master file and directory ownership in Linux. + +- Understand file ownership (user and group) +- Change file owner using `chown` +- Change file group using `chgrp` +- Apply ownership changes recursively + +--- + +## Expected Output +- A markdown file: `day-11-file-ownership.md` +- Screenshots showing ownership changes + +--- + +## Challenge Tasks + +### Task 1: Understanding Ownership (10 minutes) + +1. Run `ls -l` in your home directory +2. Identify the **owner** and **group** columns +3. Check who owns your files + +**Format:** `-rw-r--r-- 1 owner group size date filename` + +Document: What's the difference between owner and group? 
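Besides reading the `ls -l` columns, `stat` can print exactly the ownership fields, which is handy for scripted checks. A sketch (GNU `stat` format flags; creates a throwaway file):

```shell
touch ownership-demo.txt

# Owner and group as names, then as numeric IDs
stat -c '%U %G' ownership-demo.txt
stat -c '%u %g' ownership-demo.txt

# The same names appear in columns 3 and 4 of ls -l
ls -l ownership-demo.txt | awk '{print $3, $4}'
```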
+ +--- + +### Task 2: Basic chown Operations (20 minutes) + +1. Create file `devops-file.txt` +2. Check current owner: `ls -l devops-file.txt` +3. Change owner to `tokyo` (create user if needed) +4. Change owner to `berlin` +5. Verify the changes + +**Try:** +```bash +sudo chown tokyo devops-file.txt +``` + +--- + +### Task 3: Basic chgrp Operations (15 minutes) + +1. Create file `team-notes.txt` +2. Check current group: `ls -l team-notes.txt` +3. Create group: `sudo groupadd heist-team` +4. Change file group to `heist-team` +5. Verify the change + +--- + +### Task 4: Combined Owner & Group Change (15 minutes) + +Using `chown` you can change both owner and group together: + +1. Create file `project-config.yaml` +2. Change owner to `professor` AND group to `heist-team` (one command) +3. Create directory `app-logs/` +4. Change its owner to `berlin` and group to `heist-team` + +**Syntax:** `sudo chown owner:group filename` + +--- + +### Task 5: Recursive Ownership (20 minutes) + +1. Create directory structure: + ``` + mkdir -p heist-project/vault + mkdir -p heist-project/plans + touch heist-project/vault/gold.txt + touch heist-project/plans/strategy.conf + ``` + +2. Create group `planners`: `sudo groupadd planners` + +3. Change ownership of entire `heist-project/` directory: + - Owner: `professor` + - Group: `planners` + - Use recursive flag (`-R`) + +4. Verify all files and subdirectories changed: `ls -lR heist-project/` + +--- + +### Task 6: Practice Challenge (20 minutes) + +1. Create users: `tokyo`, `berlin`, `nairobi` (if not already created) +2. Create groups: `vault-team`, `tech-team` +3. Create directory: `bank-heist/` +4. Create 3 files inside: + ``` + touch bank-heist/access-codes.txt + touch bank-heist/blueprints.pdf + touch bank-heist/escape-plan.txt + ``` + +5. 
Set different ownership: + - `access-codes.txt` → owner: `tokyo`, group: `vault-team` + - `blueprints.pdf` → owner: `berlin`, group: `tech-team` + - `escape-plan.txt` → owner: `nairobi`, group: `vault-team` + +**Verify:** `ls -l bank-heist/` + +--- + +## Key Commands Reference + +```bash +# View ownership +ls -l filename + +# Change owner only +sudo chown newowner filename + +# Change group only +sudo chgrp newgroup filename + +# Change both owner and group +sudo chown owner:group filename + +# Recursive change (directories) +sudo chown -R owner:group directory/ + +# Change only group with chown +sudo chown :groupname filename +``` + +--- + +## Hints + +- Most `chown`/`chgrp` operations need `sudo` +- Use `-R` flag for recursive directory changes +- Always verify with `ls -l` after changes +- User must exist before using in `chown` +- Group must exist before using in `chgrp`/`chown` + +--- + +## Documentation + +Create `day-11-file-ownership.md`: + +```markdown +# Day 11 Challenge + +## Files & Directories Created +[list all files/directories] + +## Ownership Changes +[before/after for each file] + +Example: +- devops-file.txt: user:user → tokyo:heist-team + +## Commands Used +[your commands here] + +## What I Learned +[3 key points about file ownership] +``` + +--- + +## Troubleshooting + +**Permission denied?** +- Use `sudo` for chown/chgrp operations + +**Group doesn't exist?** +- Create it first: `sudo groupadd groupname` + +**User doesn't exist?** +- Create it first: `sudo useradd username` + +--- + +## Why This Matters for DevOps + +In real DevOps scenarios, you need proper file ownership for: + +- Application deployments +- Shared team directories +- Container file permissions +- CI/CD pipeline artifacts +- Log file management + +--- + +## Submission +1. Navigate to `2026/day-11/` folder +2. Add `day-11-file-ownership.md` with screenshots +3. Commit and push to your fork + +--- + +## Learn in Public + +Share on LinkedIn about mastering file ownership. 
+ +Use hashtags: +``` +#90DaysOfDevOps +#DevOpsKaJosh +#TrainWithShubham +``` + +Happy Learning +**TrainWithShubham** From b76b3226dd9f4469bef0b961c1f8cd5a3ad764f2 Mon Sep 17 00:00:00 2001 From: LondheShubham153 Date: Wed, 4 Feb 2026 12:57:33 +0530 Subject: [PATCH 31/33] added day 12 tasks --- 2026/day-12/README.md | 48 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 48 insertions(+) create mode 100644 2026/day-12/README.md diff --git a/2026/day-12/README.md b/2026/day-12/README.md new file mode 100644 index 0000000000..1e7d433d1f --- /dev/null +++ b/2026/day-12/README.md @@ -0,0 +1,48 @@ +# Day 12 – Breather & Revision (Days 01–11) + +## Goal +Take a **one-day pause** to consolidate everything from Days 01–11 so you don’t forget the fundamentals you just built. + +## Expected Output +- A markdown file: `day-12-revision.md` + (bullet notes + checkpoints) +- Optional: screenshots of any re-runs you do + +## What to Review (pick at least one per section) +- **Mindset & plan:** revisit your Day 01 learning plan—are your goals still right? any tweaks? +- **Processes & services:** rerun 2 commands from Day 04/05 (e.g., `ps`, `systemctl status`, `journalctl -u `); jot what you observed today. +- **File skills:** practice 3 quick ops from Days 06–11 (e.g., `echo >>`, `chmod`, `chown`, `ls -l`, `cp`, `mkdir`). +- **Cheat sheet refresh:** skim your Day 03 commands—highlight 5 you’d reach for first in an incident. +- **User/group sanity:** recreate one small scenario from Day 09 or Day 11 (create a user or change ownership) and verify with `id`/`ls -l`. + +## Mini Self-Check (write short answers in `day-12-revision.md`) +1) Which 3 commands save you the most time right now, and why? +2) How do you check if a service is healthy? List the exact 2–3 commands you’d run first. +3) How do you safely change ownership and permissions without breaking access? Give one example command. +4) What will you focus on improving in the next 3 days? 
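For question 3, one safe habit is to rehearse a mode change on a throwaway file before touching anything real. A minimal sketch, assuming GNU coreutils `stat` (the filename and modes here are illustrative, not tied to any earlier task):

```shell
# Practice permission changes on a scratch file so real data is never at risk.
touch scratch.txt

chmod 640 scratch.txt        # owner: rw, group: r, others: none
stat -c '%a' scratch.txt     # prints 640

chmod u+x scratch.txt        # add execute for the owner only
stat -c '%a' scratch.txt     # prints 740

chmod a-w scratch.txt        # make it read-only for everyone
stat -c '%a' scratch.txt     # prints 540

rm -f scratch.txt            # -f removes it even though it is read-only (you own it)
```

Verifying with `stat -c '%a'` (or `ls -l`) after every change is the same habit the earlier days drilled: change, then confirm.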
+ +## Suggested Flow (30–45 minutes) +- 10 min: skim notes from each day, update Day 01 plan if needed. +- 15–20 min: rerun a tiny hands-on set (process check, service check, file permission change). +- 5–10 min: write the self-check answers and key takeaways. + +## Tips +- Keep it light—this is about retention, not new concepts. +- If something felt shaky this week (e.g., `chmod` numbers, `journalctl` flags), practice that specifically. +- Small wins: one screenshot of a command rerun + 5 bullet notes is enough. + +## Submission +1. Navigate to `2026/day-12/` +2. Add `day-12-revision.md` with your bullets and answers +3. Commit and push to your fork + +## Learn in Public +Post 2–3 lines on what you reinforced today and one command you now remember confidently. + +Use hashtags: +#90DaysOfDevOps +#DevOpsKaJosh +#TrainWithShubham + +Happy Learning +**TrainWithShubham** From a7568c41320748ff0bf94d48a22d2f6c13cee549 Mon Sep 17 00:00:00 2001 From: LondheShubham153 Date: Thu, 5 Feb 2026 14:11:11 +0530 Subject: [PATCH 32/33] Added day 13 task --- 2026/day-13/README.md | 99 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 99 insertions(+) create mode 100644 2026/day-13/README.md diff --git a/2026/day-13/README.md b/2026/day-13/README.md new file mode 100644 index 0000000000..dafdf15e29 --- /dev/null +++ b/2026/day-13/README.md @@ -0,0 +1,99 @@ +# Day 13 – Linux Volume Management (LVM) + +## Task +Learn LVM to manage storage flexibly – create, extend, and mount volumes. + +**Watch First:** [Linux LVM Tutorial](https://youtu.be/Evnf2AAt7FQ?si=ncnfQYySYtK_2K3c) + +--- + +## Expected Output +- A markdown file: `day-13-lvm.md` +- Screenshots of command outputs + +--- + +## Before You Start + +Switch to root user: +```bash +sudo -i +``` +or +```bash +sudo su +``` +No spare disk? 
Create a virtual one (watch the tutorial): +```bash +dd if=/dev/zero of=/tmp/disk1.img bs=1M count=1024 +losetup -fP /tmp/disk1.img +losetup -a # Note the device name (e.g., /dev/loop0) +``` + +--- + +## Challenge Tasks + +### Task 1: Check Current Storage +Run: `lsblk`, `pvs`, `vgs`, `lvs`, `df -h` + +### Task 2: Create Physical Volume +```bash +pvcreate /dev/sdb # or your loop device +pvs +``` + +### Task 3: Create Volume Group +```bash +vgcreate devops-vg /dev/sdb +vgs +``` + +### Task 4: Create Logical Volume +```bash +lvcreate -L 500M -n app-data devops-vg +lvs +``` + +### Task 5: Format and Mount +```bash +mkfs.ext4 /dev/devops-vg/app-data +mkdir -p /mnt/app-data +mount /dev/devops-vg/app-data /mnt/app-data +df -h /mnt/app-data +``` + +### Task 6: Extend the Volume +```bash +lvextend -L +200M /dev/devops-vg/app-data +resize2fs /dev/devops-vg/app-data +df -h /mnt/app-data +``` + +--- + +## Documentation + +Create `day-13-lvm.md` with: +- Commands used +- Screenshots of outputs +- What you learned (3 points) + +--- + +## Submission +1. Add your `day-13-lvm.md` to `2026/day-13/` +2. Commit and push + +--- + +## Learn in Public + +Share your LVM progress on LinkedIn. + +``` +#90DaysOfDevOps #DevOpsKaJosh #TrainWithShubham +``` + +Happy Learning! +**TrainWithShubham** From 9a6ad4fd7cbcb39c38195a0529bbbefea3917062 Mon Sep 17 00:00:00 2001 From: adityapratappp Date: Mon, 16 Feb 2026 13:18:09 +0000 Subject: [PATCH 33/33] BTS-7 added git challenge --- 2025/git/README.md | 1 + 1 file changed, 1 insertion(+) create mode 100644 2025/git/README.md diff --git a/2025/git/README.md b/2025/git/README.md new file mode 100644 index 0000000000..695468a43b --- /dev/null +++ b/2025/git/README.md @@ -0,0 +1 @@ +## Week 3 challenge on git