
Posts

Site to Site VPN with Tailscale

It is common to spread your devices across two or more locations for geographic redundancy: to send our backups off-site, to distribute the load of our services, or to sync our live data somewhere else. The big tech giants and cloud providers do this through zones, regions, and so on. However, for home or small-business use cases, we cannot have switches and routers that carry enormously high amounts of data between different locations (here we are not talking about one room to another in the same place). Therefore, we need to rely on public internet services to communicate with the rest of the world. Assume a setup where you want to share your services with your best friend, so that he can utilize what you have, while you use your friend's remote NAS to store your backups for a predefined period of time. Although there are several ways to accomplish such a scenario, here I will be focusing on Tails...

Benchmark Key Value Swapping of a Dictionary in Python

Today, I was writing a Python script and needed to swap the keys and values in a dictionary. While doing so, I wanted to see how long such a swap operation takes on small dictionaries. To determine which solution is the fastest on the CPU, we can use the timeit module in Python to benchmark each solution. The methods I will be testing are a dict comprehension, a similar approach using zip(), and iterating over the dictionary items with a for loop. Here's an example benchmark:

import timeit

existing_dict = {"a": 1, "b": 2, "c": 3}

def solution1():
    return {v: k for k, v in existing_dict.items()}

def solution2():
    return dict(zip(existing_dict.values(), existing_dict.keys()))

def solution3():
    swapped_dict = {}
    for k, v in existing_dict.items():
        swapped_dict[v] = k
    return swapped_dict

print("Solution 1:", timeit.timeit(solution1, number=100000))
print("Solution 2:", time...
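One caveat worth keeping in mind alongside the benchmark (a side note, not part of the excerpt above): all of these swap approaches assume the dictionary's values are unique and hashable. When values repeat, later keys silently overwrite earlier ones:

```python
# Duplicate values collapse during a key/value swap: the last key wins.
existing_dict = {"a": 1, "b": 1, "c": 3}

swapped = {v: k for k, v in existing_dict.items()}
print(swapped)  # {1: 'b', 3: 'c'} -- the "a" entry is lost
```

Checking len(swapped) == len(existing_dict) before trusting the result is a cheap way to catch this.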

Advanced Raid Failure Simulations Using Mdadm

Introduction

RAID (Redundant Array of Independent Disks) provides fault tolerance and performance benefits, but even the best setups can experience failures. Understanding how RAID handles disk failures and how to recover from them is crucial for system administrators. In this guide, we will simulate RAID failures using mdadm, analyze failure scenarios, and practice recovery techniques for RAID 0, 1, 5, and 10.

Preparing a RAID Environment

Before simulating failures, ensure you have a working RAID setup. If you don't already have a RAID array, create one using the guide from our previous article.

Simulating Failures in RAID

RAID 0 (Striping) – Single Disk Failure

RAID 0 offers performance benefits but no redundancy. A single disk failure leads to total data loss.

Failure simulation:

sudo mdadm --fail /dev/md0 /dev/sdb

Check RAID status:

cat /proc/mdstat
sudo mdadm --detail /dev/md0

Expected outcome: The entire RAID array fails, making data recovery impossible. If thi...
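For the redundant levels (RAID 1, 5, 10), a failed member can normally be replaced and the array rebuilt. A minimal recovery sketch, assuming the array is /dev/md0, the failed member is /dev/sdb, and /dev/sdc is the replacement (the device names are assumptions; adjust them to your setup):

```shell
# Mark the disk as failed (if the kernel has not already done so),
# then remove it from the array.
sudo mdadm --fail /dev/md0 /dev/sdb
sudo mdadm --remove /dev/md0 /dev/sdb

# Add the replacement disk; the array starts rebuilding automatically.
sudo mdadm --add /dev/md0 /dev/sdc

# Watch the rebuild progress.
watch cat /proc/mdstat
```

These commands require root and a real (or loop-device) md array, so treat them as a sketch rather than something to paste blindly.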

Troubleshooting Your RAID: A Quick Guide

Introduction

RAID (Redundant Array of Independent Disks) is designed to improve storage reliability, performance, and redundancy. However, disk failures, data corruption, and array degradation can still occur. This guide provides an introductory approach to diagnosing and resolving RAID disk issues effectively.

Common RAID Issues and Symptoms

1. Degraded RAID Array

If your RAID is operational but running in degraded mode due to a failed disk, you can check the following:

cat /proc/mdstat
sudo mdadm --detail /dev/md0

Here you can identify the failed disk and replace it with a new one.

2. Failed RAID Rebuild

If the RAID rebuild fails to complete, or the array remains degraded, you can check the mdraid details:

sudo mdadm --detail /dev/md0

Go through the logs to check whether anything related to mdraid or a disk failure has been logged. You can also consult the dmesg command:

sudo dmesg | grep md

Verify disk health using SMART diagnostics and retry the rebuild.

3. RAID Not Detecting a New...
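Step 2 mentions SMART diagnostics without showing the commands. A hedged sketch using smartmontools, assuming the suspect disk is /dev/sdb (a placeholder device name):

```shell
# Quick overall health verdict (PASSED/FAILED).
sudo smartctl -H /dev/sdb

# Full attribute dump: watch reallocated and pending sector counts.
sudo smartctl -a /dev/sdb

# Kick off a short self-test, then read the results a few minutes later.
sudo smartctl -t short /dev/sdb
sudo smartctl -l selftest /dev/sdb
```

A disk that passes the overall health check can still show rising reallocated sectors, so the attribute dump is worth reading before retrying a rebuild.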

Setting Up Raid on Linux Using Mdadm

Introduction

Redundant Array of Independent Disks (RAID) is a technology that enhances storage performance, redundancy, or both. While hardware RAID has been around for decades, Linux users commonly employ mdadm (Multiple Device Admin) to create and manage software RAID arrays. In this guide, we will explore RAID's history, its early challenges, and how to set up various RAID levels (RAID 0, 1, 5, 10) using mdadm in Linux.

A Brief History of RAID

RAID was first conceptualized in 1987 by David A. Patterson, Garth A. Gibson, and Randy H. Katz at UC Berkeley. Their paper, "A Case for Redundant Arrays of Inexpensive Disks," introduced multiple RAID levels, each balancing speed, fault tolerance, and cost-effectiveness. Before RAID became mainstream, storage solutions relied on single large expensive disks (SLEDs). Failures meant complete data loss, and expanding storage was cumbersome. RAID allowed combining smaller, cheaper disks into m...
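The excerpt cuts off before the setup steps, but creating an array with mdadm follows much the same pattern at every level. A sketch for a RAID 5 array, assuming three spare disks /dev/sdb, /dev/sdc, and /dev/sdd (device names and mount point are placeholders; see the full article for the per-level details):

```shell
# Create a RAID 5 array from three disks.
sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd

# Put a filesystem on it and mount it.
sudo mkfs.ext4 /dev/md0
sudo mount /dev/md0 /mnt

# Persist the array definition so it assembles on boot
# (the config file path varies by distribution).
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
```

For other levels, only --level and --raid-devices change (e.g. --level=1 --raid-devices=2 for a mirror).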

Getting Started With Ceph

Ceph Introduction

Ceph has evolved a lot since its birth in 2007, passing important milestones such as its acquisition by RedHat and, later, by IBM after IBM acquired RedHat. Users and administrators may know a lot about it, but these milestones probably took the software much further than it was originally expected to go. In the IBM era, Ceph aims to have one major release per year, keeping the last two releases supported and retiring the rest. One of the biggest advantages of Ceph is that it is horizontally scalable, or in brief, it has a scale-out architecture. There are also several other pieces of software serving storage with different orientations, such as MinIO, GlusterFS, OpenZFS, DRBD, Lustre, etc. They all have pros and cons relative to each other. Of course this list could be extended to include proprietary software and appliances, but here we are mostly touching on open-source software. Since Ceph has a wide range of storage offerings, it can be used in many d...