Summary
I think that I, a technically skilled modern man, “need” roughly $2000 of computing spread across 4 devices. I can walk around those machines before the sun sets.
Server
I bought the core setup consisting of a Supermicro X10SLL-F motherboard, an Intel Xeon E3-1271 v3 CPU, a used 480GB Samsung SM883 SSD with only 2% of its rated TBW consumed (so very lightly used), and two 8GB Kingston 1600MHz unbuffered DDR3 ECC RAM sticks, for a total of $100 from a local computer parts reseller. I just so happened to have a Seasonic Focus GX 650 ATX PSU lying around.
$ neofetch
_,met$$$$$gg. shivaji@kaveri-homelab
,g$$$$$$$$$$$$$$$P. ----------------------
,g$$P" """Y$$.". OS: Debian GNU/Linux 12 (bookworm) x86_64
,$$P' `$$$. Host: X10SLL-F 0123456789
',$$P ,ggs. `$$b: Kernel: 6.1.0-18-amd64
`d$$' ,$P"' . $$$ Uptime: 2 days, 13 hours, 50 mins
$$P d$' , $$P Packages: 760 (dpkg)
$$: $$. - ,d$$' Shell: bash 5.2.15
$$; Y$b._ _,d$P' Resolution: 1024x768
Y$$. `.`"Y$$$$P"' Terminal: /dev/pts/0
`$$b "-.__ CPU: Intel Xeon E3-1271 v3 (8) @ 4.000GHz
`Y$$ GPU: 02:00.0 ASPEED Technology, Inc. ASPEED Graphics Family
`Y$$. Memory: 11483MiB / 15950MiB
`$$b.
`Y$$b.
`"Y$b._
`"""
Server Requirements
I use this server for the following purposes:
- Learn how to run a simple Debian server, including important aspects like permissions, security, package management, app/service deployment, reproducibility/auditability, backups+recovery, etc.
- Self-host my password manager. I do not want to give all of my passwords to any single corporation.
- There are a handful more services that I want to self-host, but the password manager is the big one. I will make a full list of these, but they include:
  - Jellyfin for streaming my music and pictures
  - Postgres for a couple of experimental data viz projects that have been swimming around my mind for a couple of years
  - A personal git server (we should assume that some day, GitHub won’t be nearly so nice to developers)
- Store a data lake and analytics environment for economic history datasets, to explore some of my economic and historical musings. After 500GB, I will have to think about this a bit more, but these kinds of datasets are typically not larger than a few gigabytes. I only want sections of censuses, not an entire census or every pre-computed aggregate that one could make from a census.
- Redundantly store my most important data, including documents, pictures, and music. Notably, I only want to stream up to 1080p to myself and maybe one other person. This is important because I want ECC + ZFS + 3-2-1 backup policies to give me a fighting chance of preserving data in this hostile world. I should be able to read from this computer anywhere I want. Currently, there is ~20GB of data that I seriously want to preserve across the next couple of decades.
Storage
Things get a little interesting (read: horrifying) here. I realized that I probably need ~4-6 drives if I want redundant storage. Since I knew that going past 8 drives would force me to rethink many things about this system, and because I wasn’t prepared for a rack-mounted form factor, I decided to go with the now-discontinued Corsair Vengeance C70 ATX tower case, or what I affectionately call “the Ammo Crate.” Ok ok, I know this is easily the stupidest part of the server. At $150 used, the case is almost as expensive as all of the other parts put together. But not many cases have six 3.5-inch drive bays. It also just “feels” indestructible. I just like the look and feel. It also enables decent airflow.
Redundantly storing 20-500GB of data is a solved-enough problem that there are at least 3 good options. For one, the capacity is small enough that I can go with an all-SSD storage setup. The only loud, power-hungry spinning metal in my life is the single external HDD that I pulled from an old laptop and put in an enclosure to use as my “cold” backup. Once per month, I manually verify it and make sure that the most recent tars are there and readable. That is good enough for my uses.
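For concreteness, the monthly check can be scripted. This is a self-contained sketch (it fabricates a tiny backup in a temp directory so it can run anywhere; the real paths and tar names would be whatever actually lives on the external HDD):

```shell
set -eu

# Stand-in for the mounted external HDD; replace with the real mountpoint.
backup_dir=$(mktemp -d)

# Pretend last month's backup is already sitting on the drive.
mkdir -p "$backup_dir/docs"
echo "important" > "$backup_dir/docs/note.txt"
tar -czf "$backup_dir/docs-2024-03.tar.gz" -C "$backup_dir" docs
( cd "$backup_dir" && sha256sum docs-2024-03.tar.gz > docs-2024-03.tar.gz.sha256 )

# The actual monthly check: the checksum matches and every member is readable.
( cd "$backup_dir" && sha256sum -c docs-2024-03.tar.gz.sha256 )
tar -tzf "$backup_dir/docs-2024-03.tar.gz" > /dev/null && echo "backup OK"
```

Reading every byte back (rather than just checking the file exists) is the whole point; bit rot on a cold drive is silent until you look.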
I decided on RAID-1 (mirroring) because it was the simplest path to redundancy while giving me good read performance, at the expense of capacity. If I really need more space, I can either use a completely separate system/zpool, or add 2 more ~480GB SSDs, switch to RAIDZ2, and triple my available storage capacity. But for now, mirrors are the simplest and most performant way of achieving the redundancy I want.
$ zpool status
pool: vp-zpool
state: ONLINE
scan: resilvered 14.3G in 00:00:58 with 0 errors on Sat Mar 23 15:00:57 2024
config:
NAME STATE READ WRITE CKSUM
vp-zpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
sdb ONLINE 0 0 0
sdc ONLINE 0 0 0
sdd ONLINE 0 0 0
$ lsblk -io KNAME,TYPE,SIZE,MODEL
KNAME TYPE SIZE MODEL
sda disk 223.6G SanDisk SDSSDA240G
sda1 part 222.6G
sda2 part 1K
sda5 part 975M
sdb disk 447.1G SAMSUNG MZ7KM480HMHQ-00005
sdb1 part 447.1G
sdb9 part 8M
sdc disk 447.1G SAMSUNG MZ7KH480HAHQ-00005
sdc1 part 447.1G
sdc9 part 8M
sdd disk 447.1G Micron_5100_MTFDDAK480TCB
sdd1 part 447.1G
sdd9 part 8M
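Checksums only catch rot when ZFS actually reads the data, so regular scrubs are part of the deal. A minimal routine looks like this (the monthly cadence is just a sensible default for an all-SSD pool, not gospel):

```
# Kick off a scrub by hand (as root); "scan:" in status shows progress/results.
zpool scrub vp-zpool
zpool status vp-zpool

# Or schedule it in root's crontab: 02:00 on the 1st of every month.
0 2 1 * * /sbin/zpool scrub vp-zpool
```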
The dirty secret here: each SSD used in vp-zpool is used or “open-box,” from eBay and my local “server parts guy.” I have a local “used consumer computer” guy and a “used server parts guy.” We live in interesting times.
I am gambling that “decent used enterprise SSDs can be better than most new consumer-grade SSDs.” I am basing this on the fact that in 2024, most consumer-grade SSDs are rated/warrantied for 200-500TBW, while the Micron 5100 Pro is rated for >1 petabyte and those Samsung 863a drives are rated for ~2.6 petabytes written before the NAND cells wear out. When I built the server, I had ~20GB of data important enough to store redundantly, and even if I were to write 20GB per day, consumer-grade drives might last me 20 years. I know there are all sorts of reasons why drives might fail prematurely. But as long as this zpool stays scoped to “my most important personal documents and media,” these drives might last me a lifetime, because I add a relatively small amount of data most days. If I want to store >100GB of video, or parquet, or some other garbage, I will probably build something more dedicated, because I want this storage to “do a lot with little.”
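The back-of-envelope endurance math, in shell arithmetic (the exact TBW figures are capacity-dependent, so treat the constants below as illustrative):

```shell
# Days of drive life at a given daily write rate, for various TBW ratings.
# Integer math; 1TB treated as 1000GB, which is fine at this precision.
daily_gb=20

consumer_tbw=200      # low end of typical consumer ratings
micron_tbw=1100       # ">1PB" class, e.g. Micron 5100 Pro (size-dependent)
samsung_tbw=2600      # ~2.6PB class Samsung enterprise drives

consumer_days=$(( consumer_tbw * 1000 / daily_gb ))  # 10000 days, ~27 years
micron_days=$((   micron_tbw   * 1000 / daily_gb ))  # 55000 days, ~150 years
samsung_days=$((  samsung_tbw  * 1000 / daily_gb ))  # 130000 days, ~356 years

echo "$consumer_days $micron_days $samsung_days"
```

Even the pessimistic consumer case outlives the warranty by a wide margin at my write rate; the enterprise drives are effectively lifetime parts, wear-wise.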
The key here is using smartctl (from smartmontools) to inspect the Self-Monitoring, Analysis, and Reporting Technology (SMART) stats of each device, compare them against the manufacturer-provided manual, and run integrity checks. I still need to find a good analog of memtest86+ for SSDs to really assuage my concerns.
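For illustration, these are the kinds of attributes worth checking on a used drive. The sample output below is fabricated so the parsing is visible (real usage is `smartctl -A /dev/sdX` as root), and attribute names/IDs vary by vendor:

```shell
# Fabricated excerpt of `smartctl -A` output; values are illustrative only.
smart_sample='  9 Power_On_Hours          0x0032   099   099   000    Old_age   Always       -       4380
177 Wear_Leveling_Count     0x0013   098   098   005    Pre-fail  Always       -       42
241 Total_LBAs_Written      0x0032   099   099   000    Old_age   Always       -       104857600'

# Wear_Leveling_Count: the normalized value starts at 100 and counts down.
wear=$(echo "$smart_sample" | awk '/Wear_Leveling_Count/ {print $4}')

# Total_LBAs_Written is commonly in 512-byte units; lifetime writes vs rated TBW.
lbas=$(echo "$smart_sample" | awk '/Total_LBAs_Written/ {print $NF}')
written_gb=$(( lbas * 512 / 1024 / 1024 / 1024 ))

echo "wear=$wear written_gb=${written_gb}GB"
```

A drive showing 98/100 wear and ~50GB written would be barely broken in; one deep into Pre-fail territory goes back to the “server parts guy.”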
I am partly comfortable with this gamble on used/open-box SSDs because I would bet money that many manufacturers build a TON of flash chips and simply use the “better” ones in higher-end consumer and enterprise SSD models (power-loss protection and perhaps controllers aside). I don’t think they specifically manufacture consumer-grade vs. enterprise flash. Maybe someone can prove otherwise for the major players in this space (Samsung, Kioxia, Micron, Western Digital, SK Hynix, etc.).
I also took care to use 2 different brands in my zpool because I want to minimize the probability of “systematic” production errors that might end up nuking my data. If I had to guess, the biggest threats to the integrity of this pool are MOBO/PSU errors that kill all drives at once, yanking on the one cable that powers all drives, fire/flooding, theft, or a hammer.
Offsite Backup
Sadly, I still use AWS S3 as my offsite backup. I really feel that $0.25 monthly cloud bill :( And I still need a good (convenient and secure) encryption scheme for the tars that I upload. Honestly, I just don’t care if Amazon can see my family pictures. I would prefer if they didn’t, but I won’t lose sleep. I just don’t think they care enough to look at anything I have when there is much more valuable crap in S3.
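One candidate scheme: symmetric encryption before the tars ever leave the machine. A sketch using openssl (age or gpg would be nicer in practice; the inline passphrase here is deliberately naive, and the bucket name is made up):

```shell
set -eu
workdir=$(mktemp -d)
echo "family photos, pretend" > "$workdir/photo.txt"

# 1. Tar the data, 2. encrypt with a passphrase, 3. ship only the ciphertext.
tar -czf "$workdir/backup.tar.gz" -C "$workdir" photo.txt
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -pass pass:correct-horse-battery \
    -in "$workdir/backup.tar.gz" -out "$workdir/backup.tar.gz.enc"

# Upload step (needs AWS credentials; bucket is a placeholder):
# aws s3 cp "$workdir/backup.tar.gz.enc" s3://my-offsite-backups/

# Restore path: decrypt, then untar; verify the roundtrip is lossless.
openssl enc -d -aes-256-cbc -pbkdf2 \
    -pass pass:correct-horse-battery \
    -in "$workdir/backup.tar.gz.enc" -out "$workdir/restored.tar.gz"
cmp "$workdir/backup.tar.gz" "$workdir/restored.tar.gz" && echo "roundtrip OK"
```

The hard part isn’t the crypto, it’s key management: the passphrase has to survive a house fire that takes the server with it.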
Improvements
I suspect that the PSU is operating at ~75% efficiency most of the time for my server workloads. With some research, I think I can find a 150-200W modular ATX PSU that exceeds 90% efficiency, which would pay for itself in ~2-4 years. I will probably swap it out in the not-too-distant future. Apparently many PSUs are most efficient at 30-70% load, and this server just doesn’t draw much power. I think “85W TDP” roughly means that I shouldn’t expect the CPU to draw more than 85W when doing serious work, which seems pretty good for my uses.
My boot drive is just some entry-level SanDisk SSD that doesn’t even have a rated/warrantied TBW. I really wanted to replace it with RAID-1 58GB Intel Optane NVMe M.2 SSDs, but I wasn’t sure my BIOS would support that, nor was I quite ready to do anything fancier than updating the BIOS. For a boot drive, 32-64GB is plenty.
I hope that in 10 years, I can swap out the MOBO+Xeon with something else (possibly ARM-based) that can idle at <10 watts with RAID-1 NVME boot SSDs. But this will be great for a while.
At some point, I will likely get two more 8GB 1600MHz ECC DDR3 UDIMMs to give my server a total of 32GB of RAM. That is a lot for serving (the computers of) 1-5 people, especially since it will serve just one person for the vast majority of its life.
I have 16 unused PCIe lanes split across one PCIe 3.0 x8 (in a physical x16 slot), one PCIe 3.0 x8 slot, and one PCIe 2.0 x4 (in a physical x8 slot). Three of the most common uses for those PCIe lanes in this type of machine are the following.
- A host bus adapter (HBA) or M.2 adapter for more storage. Given that I only have 16 PCIe lanes, I could probably fit 4 M.2 NVMe or 2.5-inch SATA drives. 3GB/s is a lot, actually. I would make a separate zpool with a single 3-way mirror vdev and possibly use that last drive for my /home and /var directories on the host.
- A low-end GPU for live transcoding. My Xeon E3-1271 v3 has been working well so far, even without an integrated GPU, but I can easily imagine situations where a $50-80 used GPU is required to reasonably transcode some movies.
- A 2.5Gbps (x4) or 10Gbps (x8) network interface card (NIC) for faster networking and file transfers. The Supermicro X10SLL-F motherboard already has two 1Gbps ethernet/RJ45 ports, and that is quite a decent amount.
For me, ~100MB/second (roughly 1Gbps) networking is quite a lot, and I just don’t think that my network speed is a huge bottleneck right now. One way that could change is if I decouple my storage and compute and need to send many large parquet files to some compute node. My router supports up to 1.75Gbps, and the two RJ45 ports (not counting the dedicated IPMI port) are enough to saturate it. Therefore, to upgrade the networking, I would also need a new router/switch and cables, likely bringing the total to >=$500.
But faster networking doesn’t seem like it will be useful for me in the near future. Because I don’t anticipate a network bottleneck, and because I don’t want a powerful GPU in my server, at least half of the PCIe lanes should go toward storage, via something like an HBA. If/when I want more storage for movies/media, I will buy three (probably SATA) 1-4TB SSDs plus an HBA; I value low-noise, low-power drives and don’t have large capacity requirements. So upgrading my storage will probably come out to ~$500 when the time comes. I really don’t want the crazy 12-16TB HDDs. I would like to think that I can be more deliberate about what I store. Even 1TB is an amount of data that would have been damn near unfathomable to any human born between 10000 BCE and 1950 CE.
Upgrading the RAM to 32GB, using a smaller and more efficient PSU, possibly adding a small GPU for transcoding, and adding maybe 2-4TB more storage constitute the full extent of where this server can go.
Lessons Learned
- If India and China go to war, and India is cut off from Japan, Korea, and Taiwan, India is so fucked. There is not a single competitive SSD manufacturer in India, and AFAICT, at most a small number of drives might be manufactured for the defense industry. It’s probably not just SSDs. I really wanted a reliable Indian drive in my server/zpool, but in 2024, it wasn’t meant to be. Maybe in my lifetime.
- Most people shouldn’t run their own server. It’s not too difficult, though, at least not if you are a professional software engineer…
- The used server parts market is a bit fun.
- 3-2-1 backup is very attainable.
Workstation
My workstation is a tank. It needs to compile large software projects, run whatever games I want (it turns out I don’t like most games with very high-end graphics, because they frequently invest less energy in gameplay), crunch numbers, create graphs, and more. I use Fedora 99% of the time and dual-boot Windows literally only for gaming. There are no other native programs for which I require Windows, and Linux makes better use of every bit of my hardware, except apparently the GPU.
$ neofetch
.',;::::;,'. shivaji@yui
.';:cccccccccccc:;,. -----------
.;cccccccccccccccccccccc;. OS: Fedora Linux 39 (Workstation Edition) x86_64
.:cccccccccccccccccccccccccc:. Host: B450 AORUS ELITE
.;ccccccccccccc;.:dddl:.;ccccccc;. Kernel: 6.7.9-200.fc39.x86_64
.:ccccccccccccc;OWMKOOXMWd;ccccccc:. Uptime: 53 mins
.:ccccccccccccc;KMMc;cc;xMMc:ccccccc:. Packages: 2616 (rpm), 35 (flatpak)
,cccccccccccccc;MMM.;cc;;WW::cccccccc, Shell: bash 5.2.26
:cccccccccccccc;MMM.;cccccccccccccccc: Resolution: 1920x1080, 2560x1440
:ccccccc;oxOOOo;MMM0OOk.;cccccccccccc: DE: GNOME 45.4
cccccc:0MMKxdd:;MMMkddc.;cccccccccccc; WM: Mutter
ccccc:XM0';cccc;MMM.;cccccccccccccccc' WM Theme: Adwaita
ccccc;MMo;ccccc;MMW.;ccccccccccccccc; Theme: Adwaita [GTK2/3]
ccccc;0MNc.ccc.xMMd:ccccccccccccccc; Icons: HighContrast [GTK2/3]
cccccc;dNMWXXXWM0::cccccccccccccc:, Terminal: gnome-terminal
cccccccc;.:odl:.;cccccccccccccc:,. CPU: AMD Ryzen 7 2700X (16) @ 3.700GHz
:cccccccccccccccccccccccccccc:'. GPU: NVIDIA GeForce RTX 3070 Lite Hash Rate
.:cccccccccccccccccccccc:;,.. Memory: 5115MiB / 32032MiB
'::cccccccccccccc::;,.
$ lsblk -io KNAME,TYPE,SIZE,MODEL
KNAME TYPE SIZE MODEL
sda disk 931.5G Samsung SSD 860 EVO 1TB
sda1 part 16M
sda2 part 696.2G
sda3 part 715M
sda4 part 1G
sda5 part 148G
zram0 disk 8G
dm-0 crypt 109.6G
nvme0n1 disk 119.2G SAMSUNG MZFLV128HCGR-000MV
nvme0n1p1 part 600M
nvme0n1p2 part 1G
nvme0n1p3 part 8G
nvme0n1p4 part 109.7G
Fedora boots from an M.2 NVMe SSD, in large part because this is the first time I am giving this form factor a chance; all of my other computers have used SATA for everything. Windows boots from the Samsung 860 EVO 1TB, which also carries an extra ~150GB partition as my “dev workspace,” the main place I write software.
Some might argue that a 128GB drive is too small for a Fedora workstation, but since all of my computers access the “real” data on my server via SMB, that 128GB only needs to hold the OS and application binaries+config. I have thought about using the Network File System (NFS) for centralized access to my important files, but I do sometimes view them from Windows, and I have read that SMB/CIFS is simpler on Windows than NFS. When Linux gaming becomes viable, I might uninstall Windows altogether and use NFS rather than SMB. But for now, Windows and SMB are dark parts of my life.
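For reference, wiring a Linux client up to a server share comes down to one /etc/fstab line via cifs-utils; the share path, mountpoint, and credentials file below are placeholders, not my actual config:

```
# /etc/fstab entry (requires the cifs-utils package).
//kaveri-homelab/data  /mnt/data  cifs  credentials=/etc/samba/creds,uid=1000,gid=1000,_netdev  0  0

# One-off equivalent, run as root:
# mount -t cifs //kaveri-homelab/data /mnt/data -o credentials=/etc/samba/creds,uid=1000
```

Keeping the username/password in a root-only credentials file (rather than inline in fstab) is the one non-obvious bit worth copying.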
Improvements
To be honest, my workstation does everything I ask of it. Supposedly, my BIOS version for this MOBO supports AMD Ryzen 5000 series CPUs. At some point in the future, I might upgrade to something like the Ryzen 5800X. UserBenchmark describes this upgrade as going from a “battleship” to a “nuclear submarine” in their stupid, fun rating system. I bet that in 5 or so years, that could be a ~$250 investment/upgrade.
For a while, I was a bit uneasy about the MOBO’s lack of ECC RAM support, but building my server as the machine that handles all of my important data assuaged those concerns. The MOBO supports up to 128GB of RAM, and there is a decent chance that I upgrade to 64GB in a few years when everyone’s shitty websites eat up more of my RAM.
For a while, I thought that this CPU supported ECC RAM, but it doesn’t actually use the ECC; only “PRO” versions of this CPU can. When the time felt right to upgrade my workstation’s CPU, I was hoping to reuse the Ryzen 2700X it currently runs as a second node for my homelab. But it wasn’t meant to be 😢.
The GPU is pretty nice though, and I don’t imagine needing to upgrade it any time soon. Unless it breaks. Possibly by my hands.
Laptop
I bought a refurbished Thinkpad T495s for ~$300 from my “local used computer guy” because I became very enamoured with the idea and feel of a “light Ryzen Thinkpad” – so enamoured that I overlooked the “measly” 8GB of RAM. The “s” stands for “slim.” I initially visited my used-computer guy to buy some parts for my server, but this was the first lightweight Thinkpad I had ever seen, and it called out to me. After I started a new job where I had the option of a MacBook or an X1 Carbon Thinkpad, I chose the latter and found out that the X1 Carbon is actually thinner and lighter! I don’t know if it is because that work machine runs Windows, or maybe the X1 Carbon is just not as power efficient, but it runs pretty hot, even when it isn’t doing much.
$ neofetch
.',;::::;,'. yui@fedora
.';:cccccccccccc:;,. ----------
.;cccccccccccccccccccccc;. OS: Fedora Linux 39 (Workstation Edition) x86_64
.:cccccccccccccccccccccccccc:. Host: 20QJ000AUS ThinkPad T495s
.;ccccccccccccc;.:dddl:.;ccccccc;. Kernel: 6.7.7-200.fc39.x86_64
.:ccccccccccccc;OWMKOOXMWd;ccccccc:. Uptime: 6 days, 10 hours, 40 mins
.:ccccccccccccc;KMMc;cc;xMMc:ccccccc:. Packages: 3721 (rpm), 21 (flatpak)
,cccccccccccccc;MMM.;cc;;WW::cccccccc, Shell: bash 5.2.26
:cccccccccccccc;MMM.;cccccccccccccccc: Resolution: 1920x1080
:ccccccc;oxOOOo;MMM0OOk.;cccccccccccc: DE: GNOME 45.4
cccccc:0MMKxdd:;MMMkddc.;cccccccccccc; WM: Mutter
ccccc:XM0';cccc;MMM.;cccccccccccccccc' WM Theme: Adwaita
ccccc;MMo;ccccc;MMW.;ccccccccccccccc; Theme: Adwaita [GTK2/3]
ccccc;0MNc.ccc.xMMd:ccccccccccccccc; Icons: Adwaita [GTK2/3]
cccccc;dNMWXXXWM0::cccccccccccccc:, Terminal: gnome-terminal
cccccccc;.:odl:.;cccccccccccccc:,. CPU: AMD Ryzen 5 PRO 3500U w/ Radeon Vega Mobile Gfx (8) @ 2.100GHz
:cccccccccccccccccccccccccccc:'. GPU: AMD ATI Radeon Vega Series / Radeon Vega Mobile Series
.:cccccccccccccccccccccc:;,.. Memory: 5712MiB / 6811MiB
Laptop Requirements
- Lighter than 1.5kg, ideally around 1kg. Any heavier and it becomes noticeable to carry around all day (on my back for 8+ hours). My previous Thinkpad W540 was a bear at ~2.7kg.
- It only needs “Chromebook” level performance: internet browsing, simple document manipulation, and SSHing into another server.
- I do not run any chonky and/or native applications that rule out huge numbers of laptops, such as Photoshop, games, AutoCAD, etc. The fact that I do not need/want my laptop to be my primary/most powerful computer opens up many good options.
In many cases, 8GB of RAM (with ~1GB dedicated to graphics) is just not enough, and 8GB is right on the edge of what I would recommend for most people today. But for me, 8GB is plenty. If I need more, I SSH into my workstation. I need my laptop so I can do “work” for a few hours while sitting outside or on vacation.
If this had 16GB of RAM, 4 more hours of battery life (6-10 hours of “real” use is pretty good, but 10-14 would be insane), and was more repairable, it might be my perfect laptop. I expect this Thinkpad T495s to last me ~4-6 years. Then maybe the kind of laptop I want will be viable at the ~$500-700 price point. I really want my next laptop to last me 10+ years, but this Thinkpad probably won’t be that laptop. The T495s is not repairable/upgradable, even by Lenovo standards, as many parts are soldered to the motherboard. So I am still looking for the last computer I will ever need to buy. My next might be a Framework, especially if they can make a swappable motherboard for the Qualcomm Snapdragon X ARM chips that are giving the Mac M2 and M3 CPUs a run for their money.
Phone
Sadly, I bought a Samsung Galaxy S22 because stupid T-Mobile decided that my dad’s old Samsung Galaxy S8 wasn’t “good enough” for them anymore and wouldn’t be supported on their “5G” network. I had a lot going on at the time and just needed a phone, so I didn’t put any thought into it. My dad used that S8 for a few years, then preferred one with better international support, so I dug out his phone because it was better than what I was running at the time.
I just don’t use my phone for much besides calling, messaging, Firefox, and music (through Jellyfin, possibly via Firefox). Really, I skim a lot of Wikipedia in small bits of downtime on the train/subway, before a friend arrives, etc.
I think the next phone I buy in 5-7 years will probably be leaner, smaller, a bit “dumber,” maybe a bit more power efficient, and I hope will cost < $400. I will probably look at Oppo, Huawei, OnePlus, etc. I actually rather like Android and think it is an acceptable amount of “googleified.” I am still on the lookout for the last phone I ever need to buy.
Conclusion
For those comfortable enough to tear apart and reassemble their computer, I would highly recommend looking for local computer repair/recycle/resell stores. I would bet that at least 70% of Americans live within a 20-minute drive of one. They can have excellent deals. The enterprise sloppy seconds taste pretty good. The people who run those stores typically don’t get too many visitors and truly enjoy building+using computers. Ebay is nice – but you can also see the Ebay sellers… in person.
As a modern, technically-literate man, I guess I “need”/enjoy having four computers: a workstation for fun and for profit, a server as my “second brain”, a phone for communication and wikipedia, and a Thinkpad for a mobile “real” screen+keyboard+trackpad/mouse package. And my router, modem, TV, microwave, washing machine, etc. In reality there are a ton of computers in my life, but those four are most under my control and the ones that are most deserving of my attention.