<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>ScottDotDot</title>
	<atom:link href="http://s.co.tt/tag/backups/feed/" rel="self" type="application/rss+xml" />
	<link>http://s.co.tt</link>
	<description>Babblings of a computer curmudgeon.</description>
	<lastBuildDate>Mon, 26 Jan 2026 16:08:52 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1</generator>
	<item>
		<title>Testing Tape Backup on LTO-8 and Looking at HPE MicroServer G10+ v2</title>
		<link>http://s.co.tt/2025/04/08/testing-tape-backup-on-lto-8-and-looking-at-hpe-microserver-g10-v2/</link>
		<comments>http://s.co.tt/2025/04/08/testing-tape-backup-on-lto-8-and-looking-at-hpe-microserver-g10-v2/#comments</comments>
		<pubDate>Wed, 09 Apr 2025 02:50:14 +0000</pubDate>
		<dc:creator><![CDATA[Scott]]></dc:creator>
				<category><![CDATA[Computers]]></category>
		<category><![CDATA[backups]]></category>
		<category><![CDATA[dell]]></category>
		<category><![CDATA[IBM]]></category>
		<category><![CDATA[LTO]]></category>
		<category><![CDATA[tape]]></category>

		<guid isPermaLink="false">http://s.co.tt/?p=2336</guid>
		<description><![CDATA[Taking a look at a Dell-branded LTO Ultrium 8 external SAS tape drive that I got new in box from eBay. (It&#8217;s actually an IBM drive and enclosure, as it turns out.) Paired the drive with an HPE MicroServer G10+ v2 that was also a NIB eBay find. Turns out it&#8217;s not the right machine with which to use this, but at least was enough to get the drive tested.]]></description>
				<content:encoded><![CDATA[<p><center><iframe width="640" height="360" src="https://www.youtube.com/embed/J6ygaavoCbc?si=pUtLeRQ6LMY5puMb" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></center></p>
<p>Taking a look at a Dell-branded LTO Ultrium 8 external SAS tape drive that I got new in box from eBay.  (It&#8217;s actually an IBM drive and enclosure, as it turns out.)  Paired the drive with an HPE MicroServer G10+ v2 that was also a NIB eBay find.  Turns out it&#8217;s not the right machine with which to use this, but at least was enough to get the drive tested.</p>
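<p>For anyone trying something similar, a minimal smoke test of a SAS tape drive on Linux might look like the following sketch. It assumes the drive shows up as /dev/nst0 and that lsscsi, mt-st, and tar are installed; none of these specifics are taken from the video.</p>

```shell
# Confirm the kernel sees a tape device at all
lsscsi | grep -i tape

# Query drive status on the first non-rewinding tape device
mt -f /dev/nst0 status

# Write a tiny test archive, rewind, and list it back to verify the data path
tar -cvf /dev/nst0 /etc/hostname
mt -f /dev/nst0 rewind
tar -tvf /dev/nst0
```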
]]></content:encoded>
			<wfw:commentRss>http://s.co.tt/2025/04/08/testing-tape-backup-on-lto-8-and-looking-at-hpe-microserver-g10-v2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>New Home Backup Server (Dell T640 with 18 20TB Disks)</title>
		<link>http://s.co.tt/2024/06/13/new-home-backup-server-dell-t640-with-18-20tb-disks/</link>
		<comments>http://s.co.tt/2024/06/13/new-home-backup-server-dell-t640-with-18-20tb-disks/#comments</comments>
		<pubDate>Thu, 13 Jun 2024 04:39:50 +0000</pubDate>
		<dc:creator><![CDATA[Scott]]></dc:creator>
				<category><![CDATA[Computers]]></category>
		<category><![CDATA[backups]]></category>
		<category><![CDATA[computer]]></category>
		<category><![CDATA[dell]]></category>
		<category><![CDATA[fan noise]]></category>
		<category><![CDATA[fans loud]]></category>
		<category><![CDATA[howto]]></category>

		<guid isPermaLink="false">http://s.co.tt/?p=2315</guid>
		<description><![CDATA[In this video I create a new backup server from a Dell EMC PowerEdge T640 with 18x 20TB Seagate Exos refurbished drives. Also there&#8217;s some more ranting about backups in general. But idk, there are chapters so you can skip to whatever. And it&#8217;s not technically a server, in that it doesn&#8217;t serve files. It&#8217;s really more of a client that takes files from other servers and holds onto them. But I call it a backup server because hardware-wise it&#8217;s a server. Anywho, that&#8217;s the description. It&#8217;s probably not optimal for the YouTube algorithm, but hey, I&#8217;m not an influencer even though I have a TikTok account that I don&#8217;t use. For posterity, here&#8217;s a transcript of the video in … <a class="continue-reading-link" href="http://s.co.tt/2024/06/13/new-home-backup-server-dell-t640-with-18-20tb-disks/"> Continue reading</a>]]></description>
				<content:encoded><![CDATA[<p><center><iframe width="640" height="360" src="https://www.youtube.com/embed/clvmAuAe2_g?si=X_Bh0-MXqsKMW04M" title="YouTube - New Home Backup Server (Dell T640 with 18 20TB Disks)" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></center></p>
<p>In this video I create a new backup server from a Dell EMC PowerEdge T640 with 18x 20TB Seagate Exos refurbished drives.  Also there&#8217;s some more ranting about backups in general.  But idk, there are chapters so you can skip to whatever.  And it&#8217;s not technically a server, in that it doesn&#8217;t serve files.  It&#8217;s really more of a client that takes files from other servers and holds onto them.  But I call it a backup server because hardware-wise it&#8217;s a server.  Anywho, that&#8217;s the description.  It&#8217;s probably not optimal for the YouTube algorithm, but hey, I&#8217;m not an influencer even though I have a TikTok account that I don&#8217;t use.</p>
<p><strong><em>For posterity, here&#8217;s a transcript of the video in case you&#8217;d rather read than listen.  But beware!  It&#8217;s as transcribed by YouTube and so may include some errors and lack of punctuation (and etc).</em></strong></p>
<p>hi everybody I&#8217;m Scott and this is a poweredge t640 with an 18 drive capacity up front uh you can see it&#8217;s got space for six up here and then some blanks for two rows of six more below that this is going to be my new backup server this monitor shows that it&#8217;s been under test for this is bad to look at one second well it&#8217;s still dim but yeah it&#8217;s been under test for almost 24 hours and because this is an eBay special and by special I mean it&#8217;s just fine it&#8217;s not like I don&#8217;t mean that in negative way got off eBay because I&#8217;m you know it&#8217;s a little bit of an older machine I think the t640 has been around for a while but this particular model is from 2021 so as of recording this it&#8217;s only about 3 years old which actually makes it one of the newer systems I&#8217;ve had down here in a while the reason it&#8217;s new isn&#8217;t because it&#8217;s particularly great it&#8217;s only with silver CPUs I think 8 core 2.1 GHz the CPUs aren&#8217;t important because this thing is not going to be doing anything CPU intensive if you couldn&#8217;t guess from the front of it it&#8217;s going to be doing something dis intensive and that is backing up because I said it&#8217;s a backup server and what do I mean by that it&#8217;s basically just going to be a file server only it&#8217;s not going to be accessible from the outside it is only going to pull files into it from other servers for the purposes of backups and it&#8217;s also I&#8217;m not going to reside in my basement I have a closet upstairs on the second floor of my house that currently has two computers and a disc array yeah there it is uh kind of teetering on top of one of the computers and those are both obviously homebuilt machines they&#8217;re nothing fancy whatsoever spec-wise in fact they&#8217;re pretty awful but they just have a ton of discs not enough discs though and that is the reason for this guy because one of them is completely full on 
space and the other one&#8217;s getting close yeah as of right now I have some video files that aren&#8217;t even getting backed up which I don&#8217;t like this guy will be paired with 18 drives oh I should get the drives most important part of the project is the [ __ ] drives but I forgot them these are the aforementioned drives right here well it&#8217;s a box these drives were another eBay purchase and right here we have 400 terabytes of raw storage in the form of these Seagate Exos I&#8217;m assuming that&#8217;s how you pronounce that X22 drives 20 TB a piece and there are 20 of them as I said before this is an 18 bay chassis so obviously it&#8217;s two extra drives but that&#8217;s good because these were sold as refurbished in other words used I don&#8217;t think they&#8217;re manufacturer refurbished although they do look like they&#8217;re in really good condition and I got them obviously much cheaper than new drives of this capacity so while it&#8217;s a risk I visited a well visited that sounds weird I found a couple of posts on Reddit about people who bought these drives or similar drives from the same seller who sells them in bulk and they all had good experiences this was on groups like uh ServeTheHome and some other enthusiast groups let&#8217;s say who generally know what they&#8217;re talking about and probably wouldn&#8217;t spam that for no reason so yeah hopefully these are okay I&#8217;m going to test them out obviously before I put them in &#8220;production&#8221; this is for a home backup server that&#8217;s why I put production in quotes and I should note that most of the data that&#8217;s going to be backed up to these drives is on this server right here and this disk array right here and between the two of those there&#8217;s about 150 terabytes of storage that&#8217;s in use and even those are getting kind of full so I&#8217;m probably going to be upgrading them soon as well maybe with more drives from the seller if they work out so it&#8217;s a lot of
data it&#8217;s a lot of video files and obviously with this much storage on the backup server it&#8217;ll be more than enough to back that up and plus I have some other servers disk images database server backups stuff like that that are all going to go on this machine as well as another off-site machine I didn&#8217;t mention the reason this is upstairs is in case the basement floods or there&#8217;s a fire or something at least presumably this server would be unaffected being two stories up in a completely different part of the house it&#8217;s not a huge house but still it&#8217;s uh at least out of the basement and that&#8217;s why I also have off-site backup servers just in case the whole house burns down um I would lose some of my B-roll video files and some other stuff that&#8217;s not mission critical or hyper mission critical but uh most of my important data would be safe there and elsewhere as well and I&#8217;m trying to get more serious about my backups so uh not long from now I&#8217;ll have a video featuring these backup tapes because I have a lot of data that exists but doesn&#8217;t change a lot so I just want to back it up once store it off site on tape and this way uh be a lot less concerned about hard drives failing so the first step before I can test all 20 of these drives or rather 18 at a time is to take them out of their baggies put them in sleds get the sleds caddies whatever you want to call them loaded into the server and I&#8217;m going to be running this this has I think an H330 a PERC H330 card which is a RAID card I&#8217;m going to be using it in HBA mode so these will just be treated as a JBOD just a bunch of disks and the OS will see each disk individually so I can use smartmontools check the status of these drives see how old they are see how many hours they have on them etc and the real burn-in test for these drives is simply going to be to create a RAID array and I&#8217;m going to be using software RAID obviously Linux software RAID and 
that&#8217;s RAID 6 now the thing I&#8217;m struggling with is it&#8217;s a really bad idea to put 18 20 TB drives in one big fat array</p>
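<p>A rough sketch of the smartmontools check described above, once the controller is in HBA mode and the disks appear individually to the OS. The /dev/sda through /dev/sdr device names (and the exact attribute labels, which vary between SAS and SATA drives) are assumptions for illustration, not taken from the video:</p>

```shell
# Loop over 18 assumed disk devices and pull identity plus usage info
for dev in /dev/sd{a..r}; do
    echo "== $dev =="
    smartctl -i "$dev" | grep -Ei 'serial|capacity'   # model, serial, size
    smartctl -A "$dev" | grep -Ei 'power.on'          # hours in service
done
```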
<p>[Music] and that really comes down to drive failures during array rebuilds with RAID 6 you can withstand up to two drive failures before data loss on the third drive failure so you say eh it&#8217;s no big deal take the bad drive out put a new good drive in it starts rebuilding that&#8217;s great the problem is that when you do that it puts all the disks in the system under heavier than normal load because this server is going to be under very low load normally if a disk is going to fail it&#8217;s probably going to fail under load especially at higher temperatures when the entire drive cage gets a little hotter from all these drives working not working their asses off because you know they&#8217;ll be reading anyway the point is the whole thing gets warmer drives work harder another drive is more likely to fail during the rebuild than at any other time and besides there is just a certain probability that a drive will fail at any time with a 20 TB drive to rebuild assuming a 100 megabyte per second let&#8217;s say of uh data streaming into the drive so let&#8217;s say that&#8217;s 100 megabytes per second going into the drive that&#8217;s a gigabyte every 10 seconds a terabyte every 10,000 seconds and 20 terabytes every 200,000 seconds a day is 86,400 seconds so it&#8217;s going to take two and a half days call it under optimal conditions to rebuild one drive in the array if a second drive goes bad and you replace that in the middle of the rebuild it&#8217;s probably going to slow things down overall so you&#8217;re probably I mean in theory it might not but it probably will so then you&#8217;re talking even longer to do the second drive rebuild while the first drive is still rebuilding you&#8217;re putting the whole array under load for even longer another drive failing is a definite possibility the question is do I split this up into two nine-disk arrays or just go for the gusto and do one big fat 18-drive RAID 6 array it&#8217;s inadvisable but the difference 
is of course with two RAID 6 arrays you have two parity drives uh two more parity drives than you would otherwise which is 40 terabytes less usable storage to put it another way with an 18-drive array you have two parity drives and 16 data drives which is 320 terabytes of usable storage it&#8217;s less when formatted but let&#8217;s just say for round numbers about 300 terabytes of usable storage if I split it up into two nine-disk arrays that&#8217;s only 14 drives of uh data total which is 280 terabytes call it 260 formatted maybe 250 so yeah it it&#8217;s really do I want to lose that extra 40 terabytes of storage I&#8217;m thinking I&#8217;m going to take the risk and that&#8217;s only because because I wouldn&#8217;t advise you do this necessarily the only reason I think I&#8217;m going to take that risk is because I have an off-site backup server my main servers are all at least RAID 6 I have RAID-Z3 I&#8217;m using ZFS on the server behind me so it&#8217;s really only in the event of catastrophic failure and plus I&#8217;m going to be doing tape backups and storing the tape securely off site so I&#8217;m not super concerned about having this fail contemporaneously with the catastrophic failure of the server down here anyway I&#8217;m still on the fence we&#8217;ll see at first I&#8217;m just going to put 18 drives in here we&#8217;re going to test the 18 drives I&#8217;m going to build an 18-disk RAID 6 at first just to burn in the disks because that array rebuild will take quite a lot of time and work the disks pretty thoroughly and then run some tests on it man 400 terabytes is heavy though oh and with software RAID an 18-drive RAID 6 or two nine-drive RAID 6s but one thing you might notice is a problem is that with the 18 drives I&#8217;m not putting the OS on those disks those are just going to be data drives there are internal USB ports in here I think I actually don&#8217;t know well anyway it doesn&#8217;t matter if they&#8217;re internal USB 
ports I have booted servers like this off of USB sticks internally I like to have all my disks externally accessible both for ease of swapping out and just to see the little blinking lights it&#8217;s a thing I have so you might notice there&#8217;s a 5.25 inch drive bay up here unfortunately unlike Dell R740XDs and maybe even some regular R740s this does not have any drive bays on the back I don&#8217;t think you can even spec that with their tower line so what I did get is an ICY DOCK drive cage which holds six 2.5 inch drives this comes with a couple of fans on the back and it fits into one 5.25 inch drive bay which non-coincidentally is of course what we have up here it&#8217;s meant for an optical drive when you order it from Dell that&#8217;s usually what they would throw into that slot but this will go in there instead and this is nothing fancy there&#8217;s no active electronics in here it&#8217;s just a pass-through it has two sets of power connectors in the back I think it&#8217;s just connected like one of those per triplet of drives on either side and as you can see it has six SATA ports so those are just pass-throughs um it&#8217;s a there&#8217;s just a dumb backplane back there the only other thing you might notice is this switch labeled H L A which I believe is the fan mode switch high low or auto I&#8217;m heavily considering just taking these fans off completely because as I&#8217;ll show you later inside the chassis of the T640 there&#8217;s a fan right behind the 5.25 inch drive bay that should pull air through this at least relatively nicely and I believe I&#8217;m only going to use the middle two bays for the two OS disks I don&#8217;t think I&#8217;m going to use any other drive bays in here so those will have decent airflow and a good amount of air gap between them and any other surface in here and plus they&#8217;re only they&#8217;re they&#8217;re going to be used like almost never you know like some log files are going to get written to them and 
they&#8217;ll really only get read from when the thing is booting up which will be extremely rare so these SSDs are not going to get a lot of use so I don&#8217;t really care about thermal properties too much and just in case you&#8217;re curious here is actually doesn&#8217;t have the model number but that that&#8217;s what I got anyway there&#8217;s no model number on here and I was just assuming that that is fan speed high low auto yeah high low auto um high is 100% low is 60% and auto is the fan will start at the low setting and increase speed depending on the hard drive temperature well who&#8217;s putting hard drives in this now if the hard disk drive exceeds 40° centigrade room temperature 25° the fan will increase its speed to maximum I don&#8217;t know where the temperature sensor in here would be but like I said I I think I&#8217;m just going to omit the fans entirely that&#8217;s because these do not look like particularly high quality fans so they&#8217;re likely to stop working anyway or at least make annoying noises eventually if not immediately and noise is another thing that highly recommends this chassis um I ran it through that memory test even under load super quiet quieter than any of the servers behind me in that rack and I&#8217;ve used this is my first T640 but I&#8217;ve had a T is that can you see it behind me no it&#8217;s on the top of this rack it&#8217;s not in use right now is a T620 I had a T710 before that and even a T605 point is that the tower servers are generally much much quieter than the rack servers and that&#8217;s by design because these are really for a more small to medium-sized business market where they might not have a dedicated data room or even closet and this might be in the same room as employees so it&#8217;s highly desirable for this to be nearly silent I mean for those of you that are really particular that want like a zero dB gaming chassis this wouldn&#8217;t satisfy you it&#8217;s not like completely silent but 
it&#8217;s got you know these are bigger fans than you would find in even a two rack unit server and they&#8217;re just a bit chunkier as well so they can move more air at a lower RPM yeah a lot of fan blades and well this is what they look</p>
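<p>The rebuild-time and capacity arithmetic above checks out; here it is as a few lines of Python, with the 100 MB/s rebuild rate carried over as the video&#8217;s own assumption:</p>

```python
# Numbers from the discussion above, using SI units (1 TB = 10**12 bytes)
DRIVE_TB = 20
REBUILD_BPS = 100e6                     # assumed sustained rebuild speed, 100 MB/s

rebuild_seconds = DRIVE_TB * 1e12 / REBUILD_BPS
rebuild_days = rebuild_seconds / 86_400          # 86,400 seconds per day

# Usable capacity: RAID 6 spends two drives' worth of space on parity per array
one_big_array = (18 - 2) * DRIVE_TB              # single 18-drive RAID 6
two_arrays = 2 * (9 - 2) * DRIVE_TB              # two 9-drive RAID 6 arrays

print(f"one drive rebuild: ~{rebuild_days:.1f} days")      # ~2.3 days
print(f"18-drive RAID 6:   {one_big_array} TB usable")     # 320 TB
print(f"2x 9-drive RAID 6: {two_arrays} TB usable")        # 280 TB
```

<p>So the single big array buys 40 TB of extra usable space in exchange for a wider failure domain, which is exactly the tradeoff being weighed here.</p>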
<p>like actually what manufacturer is this oh foxcon and even for Server Chassis uh fans do make a huge difference the big file server back the big file server back there it&#8217;s a super micr chest the fans it came with were super loud and annoying I think they&#8217;re even a little smaller than this they were like the type of the size of fans you&#8217;d find like a 2 Ru system because it&#8217;s basically two rack unit systems kind of stacked on top of each other in one shell more well more or less anyway point is fans were super loud and annoying I replaced them with this brand which I can&#8217;t remember but this is yeah and I didn&#8217;t modify anything the BIOS didn&#8217;t modify anything in the fans it was just a different brand different model of fan much quieter I mean like probably makes 20% the noise it did originally but air flow is still good uh temperatures didn&#8217;t go up and manufacturer I had that experience with a Dell server as well it shipped with some other brand of fan I replaced it with Delta fans that were actually speced by Dell they even came with the right uh pin out and the right connector on it for hot swapping this was in an r90 five I believe and again made the system much quieter it was just a different brand different manufacturer different blade design made all the difference in the world even though they were also official Dell Replacements so manufacturers tend to swap out fan brand so i&#8217; I&#8217;ve seen a lot of arguments online like people are like oh that that that server is super loud it&#8217;s it&#8217;s very annoying I don&#8217;t know how you could say that&#8217;s a quiet server and someone else is like no no no I have one it&#8217;s it&#8217;s super quiet it&#8217;s great and then they chalk it up to a disagreement usually about like what each person thinks quiet is you know is a quiet machine 30 DB is it 40 DB is it 50 like what relatively speaking what do you consider quiet I have a suspicion that 
sometimes they&#8217;re buying the same exact model of server maybe even the same year of production but they came with different fans and they make a huge difference so if you got a noisy server before you go to any crazy modifications check eBay see if you can find some other brands of fan you can usually get them pretty damn cheap if they&#8217;re used you could test them out for yourself yeah obviously I don&#8217;t know offhand which fans the best and which are not but yeah yeah and probably even model to model from the same manufacturer might make a huge difference I&#8217;m not a fan aerodynamicist but I know from empirical experience it makes a huge difference rant over this chassis was nice and quiet quiet enough for my purposes we&#8217;ll see though we&#8217;ll see when it has 18 drives spinning up front generating heat and that heat being sucked through the chassis we&#8217;ll see how much more those fans spool up and how much louder it gets I have a fairly high tolerance for noise if you couldn&#8217;t tell let me turn off the noise reduction on this microphone that&#8217;s what it sounds like down here and I work down here and do all sorts of crap that&#8217;s why my voice might sound a little funny because I got to use a [ __ ] ton of noise reduction even though I&#8217;m using the right mic for the job that&#8217;s why I have this mic I don&#8217;t know 3 cm 2 cm from my face or my mouth hole it&#8217;s it&#8217;s on my face my mouth Hole uh who the [ __ ] calls it a mouth hole it&#8217;s like 2 cm from my mouth so obviously that will be the loudest noise it picks up background noise will be much quieter relative to my talking at this distance because most of the noise is coming from 10 12T back there the point is still too loud down here but not for me I don&#8217;t mind it as long as I&#8217;m not trying to record a video that&#8217;s not an h330 I think this might have an h740p in it actually all right so now uh I guess I&#8217;ll put the drives 
into caddies into time lapse and uh you guys call them caddies trays or sleds I usually call them caddies even though I don&#8217;t think that&#8217;s the correct term because I like cdrom Cades from back in the day um that&#8217;s my earliest like memory of refering to something like that so I I call them caddies but trays sleds whatever point is this did not the server did not come with them so I&#8217;m hoping I have 18 of them uh that fit the style chassis I&#8217;m pretty sure I do and weirdly the t640 uses the previous generation style of trays even though the r 740s i have use the uh newer style of tray don&#8217;t know why so I need 18 of these guys at least I&#8217;m 90% sure I need 18 of these guys yes 18 of these oh that&#8217;s one two 7</p>
<p>that&#8217;s actually just about perfect I think cuz I think all I got is E7 but I have one more trick up my sleeve I&#8217;m just whipping out my cell phone for Simplicity and oh boy does not like my fancy blue lighting um this that&#8217;s not my label this is how it came from an eBay seller</p>
<p>but not using this chassis right now and it&#8217;s got 12 trays perfect and this is a dr4100 in case you&#8217;re curious which I think is a r 730 XD pretty much woohoo and that&#8217;s 12 more okay so that&#8217;s actually 19 right there but this way I know I have some let&#8217;s say not actual Dell uh you know knockoffs from eBay floating around here and those I had a I had a bad run with those where the spring that retracts the Button had a uh clip that would snap off and then the button would never retract and would just stay down and this thing would just flap like that and you could kind of cool it back into place but that was a pain so if any of these are defective at least I have one extra and some of them are a little Dusty these were some of these were used in my home environment obviously not the ones from that server I just took them out of those are pretty clean so that&#8217;s</p>
<p>good oh yeah I need screws as well fortunately I have a relatively comprehensive collection of screws from over the years and I think yeah I bought a bunch of these in bulk for mounting 3.5 in drive to sleds these are kinds with the fluted heads um nope oh there&#8217;s our Visa</p>
<p>Mount standoffs I was hoping I had more I got more down there I could probably pick through it because I don&#8217;t think this is going to be enough I got 18 drives times four screws a piece is 72 screws and that does not look like 72 screws it says 424 uh times 3.5 in but I don&#8217;t think this bag is full I think I&#8217;ve already used some of these Anyway come on Scott let&#8217;s get this over with in the time lapse you&#8217;re you&#8217;re probably going to see me putting three screws in each tray CU look this isn&#8217;t an Enterprise environment I don&#8217;t give a crap as long as you have one screw and one screw on the end where the connector is by and large it&#8217;s going to be fine I&#8217;ll probably I&#8217;ll put three you know two on this end and then one either here or here and it&#8217;ll be more than enough to hold the drives and whatever oh and I almost forgot another thing you&#8217;re going to see me do is label the drives because I&#8217;m going to be using software raid the LED indicators on the front of the chass seat you&#8217;re not going to tell me which Drive is bad if a drive goes bad I&#8217;ll need to determine that through software and I&#8217;ll need to either look for the serial number of the bad drive or see which serial number is absent from the list of drives or you know whatever you can see here these are both software rate arrays actually so is this one and yeah those are the drive serial numbers really handy when replacing them I mean another way to determine which Drive is bad just to use the array and see which activity light is not flashing and that&#8217;s probably your bad drive but only probably so you&#8217;re better off knowing for sure interesting these say date of manufacturer 26th of September 2023 this video is being recorded in May of 2024 so less than a year old at least that&#8217;s probably their refurb date but still that&#8217;s a good sign I&#8217;ve actually had in my life very good luck with 
refurb drives from both Western Digital and Seagate in fact I found that Seagate refurbs tend to not fail but Seagate regular drives do and I know we can argue about which is better Western Digital or Seagate personally and Backblaze statistics back me up on this um generally speaking Seagate&#8217;s consumer low-end drives are much worse than Western Digital&#8217;s drives that being said Seagate&#8217;s enterprise drive line which I think these are is usually in my experience anyway top-notch but all I can say from my personal experience and you know you could argue this all you want in the comments in my entire life I&#8217;ve had many more Seagate failures than Western Digital failures even though I prefer Western Digital and usually buy those so in other words I&#8217;ve owned more WD drives but have had more Seagates fail in other words I&#8217;ve done more Seagate RMAs or just you know thrown them in the trash than Western Digitals even though I&#8217;ve had a lot more Western Digitals over the years and HGST also really good uh both before and after the Western Digital acquisition so the way I like to label them is on the first line I put 20 terabytes because eventually years and years from now 20 TB drives might be harder to get than like 22 or 24 TB drives so I might have a mix of capacities even if I&#8217;m only using 20 TB per drive in the array and then the serial number is pretty short so I&#8217;m just going to put the whole thing on two lines and then the only irritating thing is this label maker does look it&#8217;s my favorite label maker or at least these are my favorite labels but it does waste a lot you can&#8217;t control the margins you can&#8217;t control how much extra it extrudes unlike some other label makers which are a lot more uh generous with the amount of label you actually get to use so I got to cut off the extra margin make it small enough to fit on the drive tray and then they&#8217;re oriented like this in the chassis so that is going to go roughly there 
and obviously I don&#8217;t care about fitting it within the margins of the original Dell label like no I want a nice big legible marking on there even though it got a little [ __ ] up there but whatever I usually just do a quick double check uh zx20 yez zx20 yez because if you get it wrong in the label that can cause real confusion when you&#8217;re going to replace</p>
<p>it not the most ergonomic way to do this but and voila only 177 more to go populating a large array like this and having to put all these drives into</p>
<p>caddi is both something I dread doing and something that I sort of weirdly enjoy doing it&#8217;s like very calming very Zen relaxing in a way so yeah let&#8217;s uh skip to the time lapse and some music and uh be done with this in a jiffy from your perspective</p>
<p>d</p>
<p>[Music]</p>
<p>[Applause] <strong>Ed. note: There is no applause in this video whatsoever.  WTF?</strong></p>
<p>[Music]</p>
<p>[Music] and that&#8217;s the last one how long did that take I don&#8217;t even know it&#8217;s on the screen though because the video is keeping track well it&#8217;s not the best view in the world but let&#8217;s at least see the last one</p>
<p>Ah that&#8217;s satisfying okay so now we got 18 disks obviously I got two more drives down there in that box those will be spares I&#8217;ll test those in a separate enclosure like a USB you know what I mean anyway it&#8217;s already getting kind of late here I really just wanted to get these drives in the system so I can get them testing overnight like I said testing is just going to be building an array so I&#8217;m going to boot off of a live version of Ubuntu off this USB stick configure the RAID array like I said all 18 disks get it set up as RAID 6 and get started building and just leave it overnight tomorrow I will put in the OS disks show you around the inside of the chassis in case you&#8217;re curious and um [ __ ] one more thing oh we&#8217;ll test power consumption I I didn&#8217;t test it beforehand which is probably dumb but what I can do is just sort of like you know half pop out all the drives just pull out like that test the power consumption without the drives then test it with the drives so yeah let&#8217;s get this uh get this fully booted up</p>
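<p>The overnight burn-in described above (one big RAID 6 build across all 18 disks, letting the initial resync exercise the drives) comes down to a couple of mdadm commands. The /dev/md0 and /dev/sd[a-r] names are assumptions for illustration; the actual devices used aren&#8217;t legible in the video:</p>

```shell
# Create an 18-disk RAID 6 array out of whole disks (no partitions,
# matching the just-for-testing approach described above)
sudo mdadm --create /dev/md0 --level=6 --raid-devices=18 /dev/sd[a-r]

# Watch the initial resync progress, which doubles as the burn-in
watch cat /proc/mdstat
```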
<p>Oh, you can&#8217;t see it from your angle, but I took the side panel off. What we&#8217;ve got to do is change this controller to HBA mode, and I&#8217;ll have to set up the iDRAC later, but like I said, this is just to get the machine up and running enough to build a RAID array and make sure all the disks are recognized.</p>
<p>Hmm: block probing did not discover any disks. Actually, I remember reading online that you need to cold boot the system in order to get the RAID controller to switch to HBA mode. I&#8217;m hoping that&#8217;s the problem. I&#8217;ve pulled the power cables, so let me try booting again&#8230; yeah, there they are. Okay, a terminal; there we go. Honestly, I don&#8217;t really run Linux servers with a GUI, like, ever, so this is awkward.</p>
<p>Yeehaw! Oh good, that&#8217;s installed at least. Okay, that must be the USB stick, right? Good; just wanted to make sure. Now, it&#8217;s usually wise to partition the drives first, but since this is just for testing and I haven&#8217;t decided what I&#8217;m ultimately going to do with this array, I&#8217;m just going to use the raw disks. I guess it doesn&#8217;t matter in that case, but it&#8217;s going to be level six either way. Oh, oops, that&#8217;s supposed to be an equals sign; I&#8217;m an idiot. 18 devices&#8230; okay, that&#8217;s cool. I guess this resolution is good enough for video, so you guys can read it. Okay: 0% complete, clean, resyncing.</p>
<p>By the way, the difference between these two size numbers is just whether you consider a kilobyte to be 1000 bytes or 1024 bytes, and so forth all the way up to terabytes. That&#8217;s why the numbers look different even though they ultimately represent the same amount of bits. Well, this progress display is going to be useless at this resolution, because it&#8217;s cutting off the freaking percentage.</p>
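<p>That kilobyte ambiguity is easy to sanity-check: one display counts in powers of 1000, the other in powers of 1024, so the same capacity shows up as two different numbers.</p>

```shell
# "20 TB" on the box is decimal (10^12 bytes); tools that count in
# binary units (2^40 bytes per TiB) report the same capacity as TiB.
awk 'BEGIN { printf "20 TB = %.2f TiB\n", 20 * 10^12 / 1024^4 }'
```

<p>Same bits either way; a nominal 20 TB drive is about 18.19 TiB.</p>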
<p>Well, that&#8217;s not so bad then, actually. I&#8217;ll just leave this up all night. Just for the record (oh, good timing, actually): I started the build at just about 3:00 a.m. The system time says 7 a.m. on May 19th, and it&#8217;s 3:00 a.m. New York time on May 19th, so obviously I didn&#8217;t set up the time zone correctly. Anyway, I&#8217;ll leave this server running overnight, and we&#8217;ll see how far it gets in the next 10 to 12 hours. Oh, and I almost forgot: blinky lights!</p>
<p>Ooh! Sorry, I&#8217;m looking at a screen over there; that&#8217;s why I keep glancing away. Yeah, that&#8217;s very nice. So, until tomorrow: I&#8217;ve been Scott, and I&#8217;ll still be Scott tomorrow. I don&#8217;t know why I said that. Bye!</p>
<p>Some time has now passed. Just a quick update: you can see the LEDs are still blinking. It&#8217;s been about 23 hours since I started the array build, and it&#8217;s 48% complete, so we&#8217;re probably looking at about two days, a full 48 hours, total, which is actually a little faster than I thought it would be. Nice. There we go: it&#8217;s writing at about 110 megabytes per second per disk, a little faster than the 100 I gave it credit for originally, so that&#8217;s cool. I kind of just wanted to check the performance of this operation, and by the way, 110 megabytes per disk works out to about 2 gigabytes per second through the HBA, which is really good; that&#8217;s like 16 gigabits. What kind of load is this putting on the system? 100% CPU. So it&#8217;s CPU bound, but it still seems to be maxing out the disks as well, which is kind of improbable, but anyway. I&#8217;ll check back in either tomorrow or probably the next day, because it should finish about 26 hours from now, and it&#8217;ll be super late at night by then. Once this is done, we&#8217;ll take a look at the chassis itself, install the OS disks, install an OS, and get it actually syncing files from the server. Oh, and the other thing I wanted to see&#8230;</p>
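<p>For the curious, the back-of-the-envelope math on that rebuild works out like this:</p>

```shell
# Rough numbers from the rebuild in progress: 48% done after 23 hours,
# ~110 MB/s per disk across all 18 disks through the HBA.
awk 'BEGIN {
    printf "projected total build time: %.1f hours\n", 23 / 0.48
    printf "aggregate write rate: %d MB/s (~%.1f Gbit/s)\n", 18 * 110, 18 * 110 * 8 / 1000
}'
```

<p>About 47.9 hours total (hence the &#8220;two days&#8221; estimate) and roughly 1980 MB/s, or just under 16 Gbit/s, in aggregate.</p>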
<p>Uh&#8230; I forgot. Oh, yeah: the power-on hours. Yes, that is one of the disks, obviously&#8230; and is it the same for every drive? Yeah, that&#8217;s what I wanted. Oh, frig. Well, that&#8217;s not</p>
<p>good. All right; well, if that random-ass Reddit post I just looked at is correct, then it&#8217;s actually zero. I knew it was something with the Seagate drives. Okay, cool. Now, these are refurb drives, so who knows what their situation was before; I assume they were factory reset, and the 28 hours is probably just me, since I&#8217;m assuming these came with zero hours on the clock. So yeah, we&#8217;ll just stick with that.</p>
<p>I&#8217;m back; it&#8217;s been a few days, because I got busy with work and, you know, real-life stuff. The array did finish building successfully, the drives seem to be fine, and I&#8217;m still not seeing any SMART errors; everything looks great. So I&#8217;m fairly confident these are good drives, but only time will really tell. All right: get the keyboard and stuff out of the way, and then install the SSDs in the 5.25-inch drive bay, in the Icy Dock or whatever it&#8217;s called. To go into this enclosure I just grabbed two old SSDs I haven&#8217;t been using for a while. This one&#8217;s 250 gig, and this one I think is 120 or 128 gig; it doesn&#8217;t matter that they&#8217;re different sizes, because I really don&#8217;t need more than a couple of gigs for the OS partition anyway. I&#8217;m just going to run these in software RAID 1 and create a 120-gig boot drive with the various partitions: a boot partition, the OS partition, and I guess swap too. Why not have some swap, even though it doesn&#8217;t matter.</p>
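<p>Incidentally, that power-on-hours check is easy to script. A hypothetical sketch (the sample line below is made up to mimic the SMART attribute table; a real check would loop over each drive with <code>smartctl -A</code>, and as that Reddit anecdote suggests, raw values on some Seagate models can encode more than a simple hour count, so treat the number with suspicion):</p>

```shell
# Hypothetical sample of one row of `smartctl -A` output; a real check
# would pipe the actual command for each of /dev/sd[a-r].
echo "  9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 28" |
    awk '$2 == "Power_On_Hours" { print "power-on hours:", $NF }'
```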
<p>This enclosure is metal, by the way, not a plastic shell, which I like a lot. It kind of clicks in at the front: it&#8217;s got these two little pins that go into the screw holes on the bottom, and then by the connector is where two actual screws go. I guess I&#8217;ll use the included screws rather than digging into my own stash. Oh, I&#8217;m getting ahead of myself. One thing I&#8217;d like to do is get the serial numbers off the drives, so that if one does die and I have to replace it, I know which is which. I could just label one SanDisk and one PNY, and that would probably be enough, but by tradition I go with serial numbers, so I&#8217;ll stick with that.</p>
<p>And you know what, I&#8217;m actually going to print two copies of each label, because the drives are going to be obscured on the bottom by the case; this way I can put one label on top of the disk and still know which is which, and I want the other to fit on the front of one of these bays. That&#8217;s kind of silly, yes, so I&#8217;ll use this one as the top label and cut it down. Hey, both serials have the same number of digits; well, roughly, since it&#8217;s not a fixed-width font, so who knows. Now, I don&#8217;t care about these SSDs&#8217; specs in particular. I&#8217;m not concerned with performance, because they&#8217;re just going to be used to boot and to write a few small log files. I&#8217;m more concerned with reliability, and of course with SSDs long-term reliability comes down to write cycles, but these drives are also not going to be written to a hell of a lot, so it doesn&#8217;t matter that they&#8217;re a little old and used. They&#8217;ve been gently used; I don&#8217;t think these are strongly used drives. The point here is the reliability of having RAID 1, just in case one dies of random causes, not so much fatigue. And there we go. Now, I&#8217;m not going to apply the label to the tray&#8230; well, actually&#8230; no, it would make the tray not pop out easily, because the label is thicker than the gap, and it would obscure the drive LED. I&#8217;ll put it on the drive bay below this one, since I&#8217;m only going to have two drives in this enclosure. Why is the tray sticking out, though? That&#8217;s as far back as the drive will go in the sled. It&#8217;s still proud of the bays without drives in them, but I guess those can just sink deeper. I don&#8217;t think that&#8217;s a problem; as long as it&#8217;s seated in the connector, that&#8217;s all that really matters.</p>
<p>I&#8217;m such a spaz: I put the label on upside down and had it that way the whole time. I wonder how many of you noticed. I could have just edited that part out and no one would ever know. Now, of course, if I were fully populating this unit with drives, it would be a bad idea to label them all this way and block up the airflow; even if they&#8217;re mostly going to be read drives, they will generate some heat. I think we&#8217;ll still get some airflow through these top vents, but otherwise I&#8217;d figure out an alternate way of labeling. For example, I have an Icy Dock enclosure similar to this behind me; you guys can&#8217;t see it, but it&#8217;s right about there. Yeah, there it is. As you can see, I just labeled all six drives next to the unit, and that worked out fine. Now, as for this one: again, I&#8217;m concerned about the reliability and noise of its fans, and this Dell chassis has a lot of fans in it and pretty good airflow front to back, and these drives are not going to be working terribly hard anyway. Oh, those are some chewy screws; they are not going into metal at all. I can guarantee those are going into plastic, just from the feel of them. I&#8217;m not saying the whole thing is metal; this whole back plate, from this seam wrapping around to here, is all plastic. I mean, the fans don&#8217;t feel great, but they don&#8217;t feel super cheap either; I&#8217;m not saying they&#8217;re awful. I&#8217;m just saying the Dell server has a whole ton of fans in it, so why not let them do their job? And the reason I&#8217;m completely removing this enclosure&#8217;s fans is so that there&#8217;s a bit more room for airflow, without the fan blades obscuring part of the opening when they&#8217;re not moving. Whatever airflow the chassis does pull through this, it&#8217;ll be easier to pull the air through. It also makes cabling a little easier, because we don&#8217;t have the fans tight up against those SATA connectors. I think it&#8217;s the right move for this</p>
<p>application. And of course, we never throw out screws. These are presumably the screws for mounting it in the 5.25-inch drive bay, but I don&#8217;t know why there are so many. What does it think these screws are for? &#8220;10 pan head screws for device.&#8221; I guess they want you to put in ten screws to mount this thing? One, two, three, four&#8230; there&#8217;s eight here&#8230; oh, it clearly says two are for spare parts; I missed that. Still, I don&#8217;t know who would actually install this with that many, and they&#8217;re really short for getting past the sheet metal of the case and still getting a bite into the unit, so I&#8217;m just not going to use them. Oh my God, this thing&#8217;s heavy now. It was slightly heavy before I put the drives in, but now it&#8217;s freaking massive, so before I bring it upstairs to the second floor I might take all the drives back out. Sorry my hands are so bright; I need to up the gain on the camera to make the interior of the chassis visible to you guys, because I have most of my lights on the other side and, perhaps unwisely, not so many on this side. You&#8217;ll get the idea, hopefully. So, we&#8217;ve got these PCIe card retention brackets, which are for tall cards, so they don&#8217;t rattle around. Not really necessary for me; I&#8217;ll probably just omit</p>
<p>those. All right. Up here we&#8217;ve got the 5.25-inch bay, and removing this front plate should be toolless. There we go. It&#8217;s basically a dummy bezel, like a drive bay cover but really large, and as you can see it has these screws with a bit of unthreaded standoff length, so we&#8217;ll be using them to mount the Icy Dock: two on this side and one on that side. Now, interestingly, and fortunately for me, these are the fine-thread screws, the type you&#8217;d find on CD-ROM drives and the like, not the coarse-thread screws of the old-school 5.25-inch drive bays. (They&#8217;re fine thread, not fine head.)</p>
<p>Oh, that&#8217;s why the screws it came with were so short: these are too long. The problem is that these are sort of specialty screws. You can use normal screws and just not drive them in all the way, but let&#8217;s see if I have a shorter version of these exact screws. I&#8217;ve got more of them, but the threads look about the same length; yeah, they&#8217;re identical. I&#8217;m wondering if the one I did put in is so long that it interferes with the bottom drive bay, but I don&#8217;t really care, because I&#8217;m not using it. The real problem is that the screws in the back are digging into the plastic bezel, and I don&#8217;t think I can drive them deeper, because they might hit the side of one of these SATA ports. But that&#8217;s not actually a problem. Back in the days of mechanical hard drives and optical drives, you&#8217;d want your drives locked down in a fairly sturdy fashion so they don&#8217;t vibrate, which could cause problems; you wanted them connected to the chassis quite sturdily. But with SSDs, I mean, who cares? They&#8217;re not going to vibrate, and even if the chassis experiences vibration, who gives a [ __ ]; it&#8217;s not going to interfere with the operation of an SSD. In theory it could shake a port loose or something, but let&#8217;s face it: this computer is going to be sitting on a shelf in a cabinet, and barring any earthquakes (which we had recently in New York, actually), it won&#8217;t be an issue. I&#8217;ll admit it&#8217;s kind of a weird angle to show you this, but there it is, and it doesn&#8217;t look half bad in the front of this chassis. It is a bit loose; it&#8217;ll wiggle around a bit, but it&#8217;s not going anywhere. It&#8217;s not going to pop out arbitrarily, because it&#8217;s locked in by that screw being encumbered by this bracket. As far as cabling goes, I think I can take this air shroud out with the fans still in it. Yep, the fans are modular, but you can just pull the whole thing out; that&#8217;s basically the gist of it. It mostly guides air over the CPUs and RAM, but of course there are four fans in it. In fact, you can take out this entire</p>
<p>module, and that&#8217;s what I was talking about when I said there&#8217;s good airflow through the chassis: it also has these four fairly beefy fans, which we saw earlier. Oh, and I should point out that air flows that way, so when it&#8217;s situated in the chassis, this top fan does provide some suction through the 5.25-inch bay. Here we have the PERC H730&#8230; no, H740. It&#8217;s just sort of sitting in there; it&#8217;s not even locked down by a bracket or anything, which is weird, and I thought it was missing something. We&#8217;re discovering things together: the fan shroud has this extra bracket coming up which holds the card in place, so the card is not just freewheeling in there. There&#8217;s a notch, and this lip keeps it held down against gravity. Great. So it has two SAS cables going up and over and onto the backplane over here, which of course holds the 18 drives and, I think, has a SAS expander under this heat sink, plus some beefy power cables going to it. I don&#8217;t know if I mentioned it, but this system came with (ah, it&#8217;s overexposed, damn it; it&#8217;s out of focus) 1100-watt PSUs. 1100 watts is probably overkill even for 18 SATA drives; for something like 18 15K SAS drives it would be appropriate. And these Xeon Silver CPUs are relatively low power; I think they have an 85-watt thermal design power, so they aren&#8217;t going to take much. The drives will probably take the most power of the whole system, and even that won&#8217;t be a lot, but better to have overcapacity on your power supplies than undercapacity, is my feeling. So anyway, that&#8217;s the inside of the chassis. It has 64 gigs of RAM, as four 16-gig sticks. And, just for completeness, the back of this is reminiscent of my Precision 7920, and most of that line, in that it has four PCIe slots here, then the usual ports and power supplies, and at the bottom four more&#8230; sorry, five more PCIe slots.</p>
<p>So: dual power supplies, which for me is a must. I like to put two UPSes on any system like this, and obviously it also helps in case one PSU dies. This came with a Windows Server 2019 license, which I&#8217;m not going to use; a dedicated iDRAC port (I think it came with iDRAC Enterprise, which is not terribly important); four USB 3 ports and two USB 2s; two 10-gig NICs, I assume (I know it says gigabit there, but yeah, they&#8217;re 10 gigabit); and obviously VGA and serial. Oh, and I should point out that these bottom slots are labeled CPU 2, which I imagine means they would not be active without the second CPU socket populated. I&#8217;ve never bought a dual-CPU system that didn&#8217;t have two CPUs in it&#8230; actually, that&#8217;s a lie: my NAS did, but that wasn&#8217;t a Dell, and it didn&#8217;t have the same PCIe configuration. All right. And finally, it&#8217;s still really too dark to see up there, but behind these SAS cables are two SATA ports. One is already populated with a cable that goes to the optical drive bay; the other is unpopulated, so I&#8217;ll stick my own cable in there and wire it up to the back for those two drives. The only thing I foresee being an issue is&#8230; do we have SATA power? Oh yeah, it&#8217;s buried in there. I&#8217;ve zoomed us in, so there are the SATA ports, and up here I have the other end of that SATA cable. Man, that was really tight; there&#8217;s a retention clip in there that was hard to get to. The cable&#8217;s nice and long, and I see a power connector in there too. There we go: liberty! It&#8217;s only the one, though, of course, because this is only really supposed to have one bay, plus a small optical-drive power connector, this type. The enclosure says, &#8220;In order to function properly, connect both 15-pin power connectors to the enclosure when using the device.&#8221; If I had put the drives vertically on top of each other, I probably could have gotten away with one, but this one has an angled connector anyway, which isn&#8217;t ideal for getting in there, so let me grab a splitter. All right, from my SATA cable bin: a male to two females, voila. I&#8217;ll just tuck this all neatly up in there eventually.</p>
<p>Yeah, it&#8217;s a locking connector; that&#8217;s cool. It&#8217;s longer than it needs to be, but that&#8217;s okay, it can just curl up in there. You know what, it&#8217;s an absolute nightmare trying to get my big fat hands in there, so since I&#8217;ve got plenty of slack on these cables, I&#8217;m just going to connect them outside the chassis and then slide it back in. It doesn&#8217;t matter which SATA connector goes to which drive, but I am going to put the HDD&#8230; oh, it&#8217;s labeled OD0&#8230; the blue cable on the left side, when you&#8217;re looking at the chassis from the front. That&#8217;s just for my own neurosis, though I guess technically it could help with troubleshooting or something. No, it won&#8217;t matter; it won&#8217;t matter at all. Anyway, there are all the connections made up to the back of this unit, all the connections we&#8217;re going to use anyway; then just gently guide the cables out of the way, and click. A lot of people would absolutely despise this mess and not be satisfied with it. At least for now I&#8217;m going to leave it like that until I make sure the front drive bay works, before I try tucking all these out of the way, and to be honest with you, I&#8217;ll probably leave it like this, because it doesn&#8217;t matter. It really doesn&#8217;t. There&#8217;s still plenty of room around these cables for airflow; there&#8217;s not a high density of cables there, so air is still going to flow through just fine. I&#8217;m sure a lot of you are screaming at the prospect of it being left in this state, but I kind of like the idea of someone out there fuming over how awful this is while it works reliably anyway, for years and years and years. Because what is literally going to happen inside this chassis, with all these messy wires, if no one disturbs it? And I&#8217;ll be the only one who could possibly disturb it. Oh, I&#8217;m a dumbass, though, because this does have to be tucked out of the way, at least this cable does, for the fan shroud to go back in place. And of course the other thing we need to ensure is that none of the fans get fouled on loose cables; that can be bad, that can be operationally problematic, and I&#8217;m only half kidding, because that is why you want to keep your cables neat inside your chassis, among other reasons. Anyway, let&#8217;s see&#8230; that&#8217;s got to go up and over. Yep, not pinching any cables; none of them are anywhere near the front of the fan, which has this guard on it anyway. Wonderful. Oh, I&#8217;m an idiot: this card is retained by the fan</p>
<p>shroud. And so concludes the physical assembly and array build of this 18-drive, 20-terabytes-per-drive massive beast: 360 terabytes of raw storage. It will be less once I decide how many parity drives I&#8217;m going to have in total, in other words how many arrays I&#8217;m going to have in total, because it&#8217;s going to be running RAID 6 in either case. The rest of it is just going to be loading an operating system. I might just use Ubuntu, I&#8217;m not sure, just because it has long-term support and it&#8217;s well supported and reliable, and what I ultimately need from this machine is reliability. I&#8217;m going to do a minimal install, no GUI, nothing fancy, and the only things loaded on this machine other than the really minimal OS and tools will be some scripts that I&#8217;ve already written for my other backup servers, which I&#8217;ll modify slightly for this one.</p>
<p>My backup strategy with this type of backup server mostly consists of this: as I said earlier, it&#8217;s a client, not really a server, other than SSH, which I have firewalled off so that only I can access it from one of my machines down here, with a private key on my machine and a password, so it&#8217;s pretty secure in that regard. That&#8217;s the only thing this will serve. Otherwise, it&#8217;s just a client: it has an SMB client and an NFS client, and using rsync it synchronizes with other servers, pulling their data down to it, sometimes locally, sometimes over a VPN if it&#8217;s a remote server being backed up. And that&#8217;s basically it; it&#8217;s quite simple. The only couple of mildly interesting things the backup scripts do involve retention. For example, if I&#8217;m backing up VM snapshots, especially from offsite, it will retain the last X number of snapshots; I usually have it set to 30, so it has 30 days&#8217; worth of snapshots of a remote machine (or a couple of local machines too, actually), and those are just gzipped raw disk images taken from a snapshot when the backup begins. It&#8217;s kind of like backing up the machine in a crashed state, but 99% of the time that&#8217;s fine. I do also back up my databases and other important applications at the application layer; in other words, I&#8217;ll use mysqldump to dump all the contents out of my MySQL databases, gzip those up, and get them backed up to this and other backup servers. Still, even though the machine image is in a crashed state, it&#8217;s just much easier if I can restore the whole VM image rather than having to do a SQL restore; the SQL restore might take longer than just copying over a gzipped file to the local server and unzipping it. So it retains a few snapshots of whatever information I&#8217;m putting on it. And with my file server, at least for my main, most important documents, my programming projects, and my video projects (that is, the actual project files, not all the raw video), all of that also gets rsynced over to one of these servers, and after it rsyncs successfully that night, it then gets gzipped (not deleted, just gzipped) as a snapshot of that data, stored in a separate directory on the same backup machine. The point of that being: if I get some kind of horrid malware that either deletes or encrypts my files, and rsync decides &#8220;oh, all these files changed&#8221; and syncs the encrypted files over to the backup machine, that renders the backup machine kind of useless. So I have these, we&#8217;ll call them snapshots; they&#8217;re just tar-gzips of directory structures at a point in time, and I retain 30 days&#8217; worth of those as well. So yes, if I go 30 days without noticing a ransomware attack, then all those backups will be wiped out, but for my most important data, that&#8217;s how I do it: I keep point-in-time backups, usually
daily for my most important stuff. Less important data, like virtual machine images of some of my personal servers that I don&#8217;t really modify, use, or change data on that often, might get snapshotted weekly. Anyway, the point is, it&#8217;s not fancy, but it&#8217;s reliable. And the reason for this giant-ass machine being in my house, on a different floor, is twofold. One: it backs up video files, which can be quite massive and quite time-consuming to transmit over the Internet, which is also the other reason. The inverse of that is that if I need to restore files from this machine while it&#8217;s in situ upstairs, it&#8217;ll be connected via a gigabit Ethernet connection, but I could easily bring it downstairs and connect it to a 10-gigabit Ethernet connection for the restore. I could even reconfigure it as a file server, access my files directly off this machine, and build a new backup server in the meantime, you know what I mean. It&#8217;s good having on-premises backups like this just because restores are much faster. And I would forewarn everyone: if your backup strategy is exclusively backing up to the cloud, you&#8217;ve got to remember that in an urgent situation where you need to restore your data, if you have terabytes of data, how long is that going to take just to download, let alone restore? (And if they&#8217;re application-level backups, you then have to put those backups onto each server they belong to.) It might take you a week just to download all the data, let alone restore it to the various servers. Even for home use: if your entire video library is in the cloud and you need terabytes of it back for a project, you might have to wait a couple of days, depending on the speed of your internet connection. So I&#8217;d always recommend backing stuff up locally and remotely, because of course my house could burn down, or burglars could actually manage to carry all this [ __ ] out of my house on their backs, which is, you know, daunting (it&#8217;s daunting enough for me to manage all this stuff), but it could happen, so you have to be prepared for that too. Offsite and on-site backups together are, I think, the best move.</p>
<p>The other thing about cloud backups is that they&#8217;re so hard to verify. How do you know all your files truly exist on your cloud backup provider&#8217;s hard drives? You might log in and see a list of all the files you have backed up, and all the versions of those files, and oh, it&#8217;s glorious, the files are right there; but they could just be showing you entries in their database that don&#8217;t necessarily correlate to any actual files on any actual file system anywhere in their data center. Not on purpose; I&#8217;m not saying they&#8217;re a scam. I&#8217;m just saying you don&#8217;t know how well their entire system is really managed. How often are they verifying that the files they display to you, the ones logged in their database of files, are actually stored somewhere? Because I&#8217;m assuming they abstract the file information from the actual file storage; otherwise, I feel like it would be a nightmare, storing files on all sorts of different servers and all sorts of different disk packs and then trying to find them arbitrarily. There has to be some centralized master list, which is probably not a file system. And even if it is a file system, how often are that file system&#8217;s contents verified to exist and be in one piece? In other words, my point being, you could have a catastrophic failure, go &#8220;oh my God, I&#8217;ve got to go to my cloud backup provider and get all my files,&#8221; and it could turn out that some or all of your files just aren&#8217;t there, even though you thought they were. And even
if, every now and then, you&#8217;re smart and go download a couple of arbitrary files here and there just to make sure they exist, that verifies some of them, but you don&#8217;t know that all of them exist unless you download all of them and look at them yourself, which could take days or weeks depending on how much data you have stored and on the speed of your internet connection. With this machine, at least, first of all, I know how it works. I know there&#8217;s no abstraction between the file system and the files that are stored; I mean, there is internally to the file system, I suppose, if you want to be pedantic, but you know what I mean: there&#8217;s no separate MySQL or CouchDB instance storing a list of files and expecting it to correlate with file systems on various servers throughout a data center. That&#8217;s really what I&#8217;m talking about when I talk about abstraction in that sense. I just feel more comfortable having my data under my control and knowing exactly how it&#8217;s stored and how it all functions, end to end. It just makes me feel better. It&#8217;s more of a hassle, definitely more of a hassle, and more expensive in the end. I mean, take Backblaze, a backup provider I do recommend to friends and family and have used for other purposes as well. I do like Backblaze; I particularly like their transparency and the Drive Stats they publish. (And no, this is not a paid endorsement; they didn&#8217;t sponsor this video or anything, and I don&#8217;t give a crap whether you use them or not. Generally speaking I do like them; it&#8217;s just that, because they&#8217;re a cloud provider, I don&#8217;t trust them. Not because they&#8217;re bad, just because they&#8217;re not me.) Anyway, the point is, a Backblaze subscription would be far cheaper than what this server costs, and I&#8217;m pretty sure they offer unlimited storage to this day, so in theory I could back up everything to Backblaze for much, much less money than this thing costs, and probably for less money per month than this thing will cost to run in utility bills, maybe.</p>
<p>Then again, most of you probably don&#8217;t need this kind of absurd level of storage. Honestly, if you want something low power and quiet, hook a Raspberry Pi up to a couple of USB SSDs and stick it in a closet somewhere, assuming that&#8217;s enough storage for you. It&#8217;d be dead silent, wouldn&#8217;t use much power, and could do its thing with the same sort of architecture I have here: your data is in your closet and also offsite somewhere. Back up to a cloud provider for your offsite needs; I&#8217;m not saying don&#8217;t do that. I&#8217;m just saying make yourself more comfortable and also back up locally. And snapshots are very important, because ransomware will murder you if you don&#8217;t have them: otherwise your encrypted files will just get backed up and, depending on how you have your backups configured, potentially overwrite your existing backups. That&#8217;s what would happen in my case if I weren&#8217;t also tar-gzipping all those files nightly and storing them in timestamped archives, 30 days&#8217; worth. That&#8217;s part of the reason I need such a massive amount of space: right now I have approximately 150 terabytes of data here on the server behind me to back up, which even with four parity drives would fill up less than half of this, but there&#8217;s a lot of overhead in that I duplicate a lot of that data by keeping all those snapshots. Anyway, I&#8217;ve done plenty of videos ranting on about backups for way too long, but thanks for watching. Maybe I&#8217;ll post a follow-up to this once it&#8217;s up and running. If anything goes wrong, I&#8217;ll probably post a fallout video fixing the mess, but if everything goes right then 
yeah this is just replacing two of my current backup servers in that closet which already have been doing their thing and running fine for I want to say one of the the one on the top shelf has probably been doing its thing now for8 years I don&#8217;t think I&#8217;ve upgraded that in a long time and the one in the bottom probably five years and they just hum away and they do their thing and those are made of very cheap parts this in theory should last a long time but we&#8217;ll see anyway I&#8217;ve been Scott uh good night why with the salute I always with the saluting I always [ __ ] salute</p>
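<p>For the curious, the nightly snapshot scheme described above boils down to something like this. (A minimal sketch; the paths and filenames are illustrative stand-ins, not my actual setup.)</p>

```shell
#!/bin/sh
# Nightly timestamped tar.gz snapshots with a 30-day retention window.
# SRC and DEST are demo paths, not a real backup layout.
SRC=/tmp/demo-data
DEST=/tmp/demo-snapshots

mkdir -p "$SRC" "$DEST"
echo "example file" > "$SRC/file.txt"   # stand-in for real data

# One timestamped archive per night...
STAMP=$(date +%Y-%m-%d)
tar -czf "$DEST/snapshot-$STAMP.tar.gz" -C "$SRC" .

# ...then prune anything older than 30 days.
find "$DEST" -name 'snapshot-*.tar.gz' -mtime +30 -delete

ls "$DEST"
```

<p>Run from cron each night: because every archive carries its own date, a ransomware-encrypted source only poisons new snapshots, while the older archives survive until retention expires.</p>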
<p>Does anyone mind the saluting? Like, should I keep doing that? That was more of a doffing of the cap. The [ __ ] is... no, that&#8217;s a salute. I did more of like a cap-doffing. I never wear a hat; I don&#8217;t know why I do that. Tip my fedora? That&#8217;s more like this. Now I don&#8217;t even remember: do you tip your fedora this way, or do you tip your fedora that way? Who knows. It&#8217;s not even a thing; it&#8217;s a meme. I mean, I think that [ __ ] actually did wear a fedora in that meme, for his own purposes. I shouldn&#8217;t have said that; I&#8217;ll have to beep that out. Yeah, you can&#8217;t make fun of people on the internet anymore, though to be fair, it would be inappropriate to randomly make fun of people in a video about a [ __ ] backup server, right? Okay, now I&#8217;m actually done.</p>
]]></content:encoded>
			<wfw:commentRss>http://s.co.tt/2024/06/13/new-home-backup-server-dell-t640-with-18-20tb-disks/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>DATA GRAVE ⚰ Underground Backup Servers</title>
		<link>http://s.co.tt/2022/12/07/data-grave-underground-backup-servers/</link>
		<comments>http://s.co.tt/2022/12/07/data-grave-underground-backup-servers/#comments</comments>
		<pubDate>Wed, 07 Dec 2022 22:38:10 +0000</pubDate>
		<dc:creator><![CDATA[Scott]]></dc:creator>
				<category><![CDATA[Computers]]></category>
		<category><![CDATA[DIY]]></category>
		<category><![CDATA[Videos]]></category>
		<category><![CDATA[backups]]></category>
		<category><![CDATA[computer]]></category>
		<category><![CDATA[DATA GRAVE]]></category>
		<category><![CDATA[outdoors]]></category>
		<category><![CDATA[Qilipsu]]></category>
		<category><![CDATA[video]]></category>

		<guid isPermaLink="false">http://s.co.tt/?p=2257</guid>
		<description><![CDATA[Related video: QILIPSU Outdoor Enclosure with a Computer Inside&#8230; Because. Visit the Data Grave coffins: outdoor.s.co.tt Hi, I’m Scott and today we’re going to talk about a couple of computers I buried in my backyard to create a data graveyard. They’re Raspberry Pies, which are great for this purpose as they’re compact and consume very little power, meaning they can be supplied by power over ethernet and won’t cause rampant heat dissipation issues. But they’re also pretty good for their intended purpose: Backups. If you’ve seen a couple of my other videos, you know I tend to go on rants about backing up data. For me, and many of you, most of the content I generate is digital. Losing all … <a class="continue-reading-link" href="http://s.co.tt/2022/12/07/data-grave-underground-backup-servers/"> Continue reading</a>]]></description>
				<content:encoded><![CDATA[<p><center><iframe width="640" height="360" src="https://www.youtube.com/embed/9hDbz1XfpUM" title="DATA GRAVE ⚰ Underground Backup Servers" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></center></p>
<p><strong>Related video: <a href="https://youtu.be/qegaLn-cvVw" target="_blank">QILIPSU Outdoor Enclosure with a Computer Inside&#8230; Because.</a></strong></p>
<p><strong>Visit the Data Grave coffins: <a href="http://outdoor.s.co.tt" target="_blank">outdoor.s.co.tt</a></strong></p>
<p>Hi, I’m Scott and today we’re going to talk about a couple of computers I buried in my backyard to create a data graveyard.</p>
<p>They’re Raspberry Pies, which are great for this purpose as they’re compact and consume very little power, meaning they can be supplied by power over ethernet and won’t cause rampant heat dissipation issues.</p>
<p>But they’re also pretty good for their intended purpose:  Backups.</p>
<p>If you’ve seen a couple of my other videos, you know I tend to go on rants about backing up data.  For me, and many of you, most of the content I generate is digital.  Losing all my data would be tantamount to someone’s house burning down in the pre-digital age.  All my stuff is in there.</p>
<p>And in fact, house fire, floods, natural disasters, burglaries, war, seizure and all sorts of other catastrophes can lead to the destruction of your data.  Which is why I always advocate for off-site backups, so your digital possessions aren’t tied to your physical ones.</p>
<p>So in that way I was considering alternatives to traditional off-site backups, and I came up with the idea of the Data Grave.</p>
<p>It’s kinda tongue-in-cheek / kinda serious.  Do I think burying your data in your yard is the wave of the future for data security?  No, of course not.  But I do think it’s almost a viable secondary or tertiary backup strategy.</p>
<p>You might consider it your only offsite backup solution if you don’t want to store your data with third-party companies like Google, Apple, Backblaze, and so forth, and if you don’t have a secure alternate location in which to situate a server that’s 100% under your control.</p>
<p>With a deep enough burial, your underground data is likely to survive a nuclear apocalypse… even if you probably won’t.</p>
<p>I should say, this video is a proof-of-concept.  The way in which I prepared and entombed the two computers in their coffins isn’t necessarily optimal.  I’m just testing it out at this point, and I plan on revisiting the project in a year to see how it’s going.</p>
<p>Actually, since you might be watching this video quite a while after I uploaded it, you can keep an eye on how it’s going in real time.   Over a year ago, I mounted a full computer in an outdoor enclosure to see how it would hold up (link in the video description), and for this project I changed out the computer but left the enclosure and the website up.  Go to outdoor.s.co.tt, and that site is served by the computer mounted to the side of my house.  At the bottom of the page are links to each coffin.  The web pages are exceedingly simple, but they’re hosted on web servers in each coffin.  If the web pages come up, then the underground computers are alive.</p>
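<p>Scripting that “is the page up?” check yourself takes only a request and an exit code. A minimal sketch using curl; the localhost URL exists purely to demonstrate the failure path, and any real hostname you probe is up to you:</p>

```shell
#!/bin/sh
# Report UP if an HTTP request to the URL succeeds, DOWN otherwise.
is_alive() {
    if curl -fsS --max-time 5 "$1" > /dev/null 2>&1; then
        echo "$1 UP"
    else
        echo "$1 DOWN"
    fi
}

# e.g. is_alive http://outdoor.s.co.tt
is_alive http://127.0.0.1:9   # port 9 is almost certainly closed
```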
<p>Technically, the concept is pretty simple:  Use a Raspberry Pi (or other low-powered single-board computer) as a backup-slash-storage server.  Put a massive SD card in it, and maybe even attach some USB drives.  Those drives could (and should) even be put into a software RAID array.  Then bury the whole thing in your back yard.. or wherever.</p>
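<p>For the software RAID piece, mdadm is the usual tool on Raspberry Pi OS and other Debian variants. The shape of the commands is roughly the following; this is a sketch only, run as root, and the device names, RAID level, and mount point are illustrative, so check yours with lsblk first:</p>

```
# Mirror two USB drives into one md device (this DESTROYS their contents)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Filesystem and mount point for the backups
mkfs.ext4 /dev/md0
mkdir -p /srv/backups
mount /dev/md0 /srv/backups

# Persist the array definition so it assembles on boot
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
```

<p>RAID 1 sacrifices half the raw capacity, but for a buried server you can’t casually service, surviving a single drive death is worth it.</p>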
<p>So, if you’re interested here’s how it all came together.</p>
<p>I started with a couple of Raspberry Pi 3 B plusses.  They’ve got quad-core ARM Cortex SoCs running at 1.4GHz, 1GB of RAM, four USB 2.0 ports, and an ethernet port that’s going to become extremely relevant.  They’re neither fast nor powerful as computers go, but are more than sufficient for use as a home backup server.</p>
<p>Their low power consumption and ethernet port are extremely important because they can be combined with a power-over-ethernet hat (made by LoveRPi in this instance), meaning they each need only one cable connecting them to the outside world, which will provide both power and data.</p>
<p>It’s called a “hat” because it sits atop the Pi, interfacing with a few select pins for power and ethernet, while not obscuring the CPU or other pin headers.  Well, a hat usually obscures your whole head, but in this case it’s a double entendre and HAT stands for “hardware attached [on] top”.</p>
<p>With the hats attached, I imaged a pair of micro SD cards with the latest version of Raspberry Pi OS, a Debian variant.  Aside from the hostnames, both installations are pretty much identical from start to finish, but one card has a capacity of 256GB and the other only 32.  When using them for backups, I’d probably keep the SD cards small and use USB drives in RAID for the actual data storage.</p>
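<p>Whatever tool wraps it, imaging a card is just a raw block copy. To illustrate without endangering any real disk, this sketch copies between two plain files; on a real card the output would be the SD device itself (something like /dev/sdX, as root, after double-checking the device name):</p>

```shell
#!/bin/sh
# Raw "imaging" demo using plain files in place of an OS image and a card.
dd if=/dev/urandom of=/tmp/demo-os.img bs=1M count=4 status=none   # stand-in OS image
dd if=/tmp/demo-os.img of=/tmp/demo-card.img bs=1M conv=fsync status=none

# Verify the copy bit-for-bit, as you might after writing a real card
cmp /tmp/demo-os.img /tmp/demo-card.img && echo "verified"
```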
<p>I chose to install the full GUI, which is an LXDE desktop environment running the Openbox window manager.  The GUI is completely unnecessary for a backup server, but I figured in this case it would be good for load testing, as well as being more photogenic when appearing in a video.</p>
<p>I also tested out the PoE hats using a TRENDnet TPE-S44 ethernet switch with four standard and four PoE ports.  It has a maximum capacity of 15.4 Watts per PoE port, which is more than enough for the Pi.  The Pi consumes about 2 Watts at idle, while its maximum consumption depends upon workload and the devices that are attached.  But in any case, it’s much less than the maximum of this relatively inexpensive switch.</p>
<p>To help keep connectors from corroding, my plan was to coat them all in dielectric grease.  This includes the SD cards and slots.  Here I used a 3M silicone product which is intended for automobiles, but should be more than suitable in a Raspberry Pi… probably.</p>
<p>Being as these Pies would run in a truly headless configuration, I configured their VNC servers and tested them with only an ethernet cable attached to verify overall functionality.  And not for the last time, either, as you’ll see.</p>
<p>When mounting a computer outside in the elements, let alone underground, the main issue is water.  I suppose if you live in the desert then maybe it’s less of an issue, but it still does rain on occasion.  Given enough time, water will get into pretty much anything, even a decent quality supposedly “water tight” case.  But an allegedly impervious container is as good a place as any to start, for mechanical protection at least.</p>
<p>So I decided to use a  couple of small Pelican cases.  Both are relatively inexpensive, and both are large enough to fit a Pi with USB drives attached.</p>
<p>But the cases aren’t going to be the only differences between these two.. coffins.  In the yellow one, the Pi will sit in there with only an ethernet cable attached.  In the grey one, there will be dongles to allow for debugging in the future, should the networking fail at a hardware or software level.  I used very short USB and HDMI extensions so a display and input devices could be connected.  In retrospect, extending the micro USB power input port may have been wise, as a failure of the network port or PoE hat could render the device powerless.  Though, I wonder if you can backfeed 5 volts into the Pi via the USB-A host ports?  Perhaps one day I’ll have to find out.</p>
<p>The ethernet port was also extended, but with a theoretically water-tight connector that has an RJ-45 jack on the inside.</p>
<p>Of course, those extender dongles aren’t necessary, unless the actual ports on the Pi are somehow inaccessible.  And indeed that’s what will happen, because the next step is to pot the Pies in with epoxy.</p>
<p>But first I prepared a couple of USB flash drives by removing their cases.  The idea being that any void spaces sealed in the epoxy (like the insides of the plastic shells) will contain air that will inevitably have some moisture content.  I keep the humidity low in my basement, but at a sufficiently low temperature some water might condense.  Plus this will let the flash chips bond directly to the epoxy, helping to wick heat away.</p>
<p>I used two different types of potting compound.  The yellow coffin will use a clear compound to allow us to see the Pi as it sits in stasis, LEDs blinking away.  More importantly it might help with fault analysis without the need to remove the solidified epoxy, when the device inevitably fails.  (Though hopefully later rather than sooner.)</p>
<p>The grey coffin will receive a black, extremely opaque formulation.  But that formulation has the benefit of being thermally conductive, which should (and as we’ll find out, indeed does) keep the Pi running cooler.  The downside is that it’s going to be quite the forensic archeology project to reveal any physical faults.  But I couldn’t find a compound that was both clear and thermally conductive.  (Not saying it doesn’t exist, just that this is what I ended up buying.)</p>
<p>I don’t know how necessary this was (and it probably wasn’t), but I filled all of the unused connectors with more dielectric grease prior to potting.  Realistically, if water infiltrates the potting compound then it’ll affect all parts of the PCB, not just the connectors.</p>
<p>The connectors that were used also got greased up, but those I justify by the thought that water might wick through the cables into the connectors specifically.  Hopefully the grease will help to ameliorate that problem.</p>
<p>To save some space and reduce the amount of potting compound needed, I filled some unused volume in the grey coffin with open cell foam.  Because the foam would absorb the epoxy, I covered it in electrical tape (the best kind of tape) before pouring.</p>
<p>If you ask me now which of the two concoctions was better to work with, I’d say the clear compound without a doubt.  I’m not sure if it was due to the thermally conductive mix or some other difference, but the black epoxy was ridiculously viscous to the point that it was difficult to stir and ultimately pour.  It also smelled horrible.</p>
<p>But time will tell if it’s the better choice, as it may hold up better in its earthen environs.</p>
<p>In any case, it wasn’t untenable, and I successfully mixed up my first small batch.  The directions said to let it stand for 15 minutes after mixing to de-air.  I’d actually purchased a cheap vacuum pump and vessel to remove bubbles in a faster and more thorough manner, but due to personal reasons (the pump being loud as shit and my wife being asleep) I decided to de-air it au naturel.</p>
<p>With that first batch, I coated the bottom of the grey case with a few millimeters of the stuff.  Then with the remnants, I thoroughly coated the back of the Pi.  I think this is a necessary step to ensure that the PCB gets full coverage and adhesion on the underside, as just plonking it into a pool of the stuff might leave bubbles against its underside.</p>
<p>I then pressed it down a bit and brought it up slightly, so that the board wasn’t in contact with the bottom of the case.  I had considered using stand-offs to keep an adequate layer of potting material between the inside of the case and the bottom of the PCB, but that would have created an interruption in pottedness at those points so I opted to finesse the board a few millimeters above the bottom of the case and leave it at that.</p>
<p>The potting compound was so thick and tarry that I don’t believe the Pi was able to sink into it.</p>
<p>Then I created a batch using the remaining compound, which was quite a lot.  That harkened back to the part of the instructions that said the specified 2-hour working life was based upon a batch size of 100 grams, and that the working time shrinks as the batch size grows.  I measured mine at about 475 grams (minus the cup) and &#8212; worried about de-airing it for too long &#8212; I poured it pretty soon after.  I figured it could de-air in the coffin, and the bubbles would rise away from the PCB anyhow.</p>
<p>Then it was the turn of the yellow coffin, and I probably needlessly injected dielectric grease into all of its unused connectors.  The potting compound probably would have flowed in and filled them completely regardless, but I felt that it was the best move in case moist-ish air did get trapped within them.</p>
<p>The yellow coffin – let’s call it the “yoffin”.. wait, no, that’s awful – also wouldn’t have any of the fancy dongle trappings of the grey coffin (Goffin?).   It would just be a straight run of CAT 6 through a hole in the case and then into the Pi.  </p>
<p>I measured the cable and made the hole as tight a fit as possible, but of course that’s not gonna stop water from infiltrating around it.  But the hole will be below the level of the potting compound, and besides, I stripped back far more of the cable sheath than necessary.  This way the individual conductors will become encapsulated, providing a break for any water wicking inside or around the cable.</p>
<p>The Pi also got one final boot-up before potting, just as did the one in the Goffin.  Everything checked out, so it was time to pot.</p>
<p>The clear fluid, being less viscous, de-aired quite effectively just sitting on the table.  It was a mix of about 375 milliliters total, which was the entire contents of the two containers poured simultaneously.</p>
<p>I poured it over the back of the PCB first to ensure coverage, and then flipped it over and continued pouring to get it completely covered.  That 375 mL was more than enough to fill the Pelican 1040 case.  Incidentally, the hard plastic exterior of the case is made entirely of clear plastic.  The yellow insert is a flexible rubber-like material which to me didn’t seem desirable as it would thermally insulate the potting compound from the earth, to which the epoxy probably wouldn’t adhere all that well.  But the lip of the yellow insert makes up the seal for the box, so without it water would just be able to flow right in.   In retrospect I would have carefully cut off that lip, used it as a seal, and disposed of the rest of it.  That would have also allowed us to view the underside of the PCB during failure analysis.</p>
<p>It was also a little tricky getting the CAT 6 conductors bent into a good way to keep the board neatly positioned.  But in the end, the Pi was fully submerged, and the cable sheath was far away from it.</p>
<p>It was then time to cure the epoxy, and initially I was just going to let it sit out at room temperature which could have taken a maximum of 96 hours.  My hope was that the relatively large pours would have resulted in faster curing, but after about 24 hours both mixes were still soft.</p>
<p>The instructions mentioned heat curing, but at relatively low temperatures.  The lowest my oven would go was 250 degrees Fahrenheit, but I got impatient and decided to use the oven door to regulate the temperature.  A few comically overblown multimeters with really horrible thermocouples were deployed to measure the temperature of the coffins at various positions.  (The idea being that the multimeters could graph the temperature, but that wasn’t helpful, and the displays were too small to monitor while standing near the camera.)</p>
<p>After roughly half an hour each at anywhere between 100 and 175 degrees Fahrenheit, the potting compound seemed to have solidified – on the surface, anyway.</p>
<p>If you saw my previous video on the Qilipsu outdoor enclosure, you’ll know what this box is.  It spent a little over 15 months attached to the outside of my house, braving a freezing New York winter and a couple of hot and occasionally stormy summers to test the enclosure.</p>
<p>Inside is an old Celeron mini-ITX motherboard which was incredibly shitty and slow, but which hosted a web server that announced to the world whether or not the system was up and running.  Happily it never went down due to any kind of failure of the computer nor the enclosure, but I figured it was time to resurrect the project in a new and improved form.</p>
<p>The case received a new single board computer in an industrial style chassis – the branding on it is “V-N-O-P-N” (VN Open?) – which has a newer Celeron J4125 quad-core CPU, 8GB RAM, a 128GB mSATA SSD, as well as onboard WLAN and four ethernet ports.</p>
<p>That was paired with a TRENDnet TPE-S44 ethernet switch with 8 ports, 4 of which are PoE-capable.</p>
<p>The switch uses a 48V power supply that matches the PoE voltage, whereas the computer uses a standard 12V PSU.  I would have preferred one supply split on the DC side between the two devices, but instead I mounted both PSUs in the enclosure.</p>
<p>The components were all attached to the backing plate of the case using industrial strength Velcro – which has a really sturdy adhesive backing and strong hook and loop connections – but the real holding is done by zip ties.</p>
<p>The holes in the Qilipsu backing plate are only large enough to pass very small and weak ties, so I marked and drilled some of them out for these larger colorful ones.</p>
<p>The purpose of this wall-mounted computer would now be to act as a router to connect the Coffins to the outside world.   Ethernet would be passed to the outside using the same connector boots as I installed in the grey coffin.</p>
<p>Those connectors have ethernet jacks concealed within, and cleverly the cinch nut is large enough to fit an RJ-45 connector, and the bushing is split to go over a cable.  It means you can pre-terminate cables before connecting them, which is really the main advantage of using a connector like this rather than running the cable through the side of the case and then terminating it, like I did with the yellow coffin.</p>
<p>To distribute power to the two PSUs, rather than put a power strip or multi-tap NEMA socket in the box, I used a Y-splitter cable that I had lying around, in addition to an IEC C-14 to NEMA 5-15 adapter cable so that I didn’t have to replace the existing inlet power cord.</p>
<p>The mini-computer booted up just fine, and I set about configuring it as a web server and router.</p>
<p>If you’re watching this years from now, I may have taken this project offline.  But you can go to outdoor.s.co.tt to see if this system is still up and running.</p>
<p>Before burying the coffins, I tested out the whole Data Grave setup on the bench one last time.  I mean, if it didn’t work it was a little late to fix any hardware issues with the potted pies, but aside from a minor DHCP issue the whole thing came together without a hitch.</p>
<p>(The Pies use DHCP to obtain their IP addresses, making it easy if I need to connect them to a different net for debugging.  The router has static DHCPD entries based upon MAC addresses to ensure that the coffins’ IPs remain consistent between reboots.)</p>
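<p>(In ISC dhcpd terms, those static entries look something like the following.  The MAC addresses and IPs here are invented for illustration, though b8:27:eb genuinely is the Raspberry Pi Foundation’s OUI.)</p>

```
# /etc/dhcp/dhcpd.conf fragment -- one host block per coffin
host coffin-yellow {
  hardware ethernet b8:27:eb:00:00:01;
  fixed-address 192.168.1.101;
}
host coffin-grey {
  hardware ethernet b8:27:eb:00:00:02;
  fixed-address 192.168.1.102;
}
```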
<p>The entombment had far less fanfare than a normal burial.</p>
<p>For this test setup, I located the hole right next to my house.  If one were to do this for real as a serious backup solution, I’d dig the hole farther away from anything flammable or destructible.</p>
<p>Obviously there’s grass here that I didn’t want to ruin, so I flayed the top soil as you’d do when cutting sod.   Only without a real sod cutter, so it was a bit more awkward than it could have been, but the result wasn’t bad.</p>
<p>Oh yeah, and when you’re digging a hole in this sort of circumstance (to bury electronics in?) it’s a good idea to put a tarp down to throw the fill onto.  Makes cleanup much easier, and doesn’t ruin the grass.</p>
<p>I only dug down about two feet, which is above the frost line here in New York – the depth to which the ground is likely to freeze in winter.   That means the coffins will probably be encased in icy soil for some of the winter, and they and their cables will be subject to strain as the ground heaves and settles.  So it’s probably wiser to look up the frost line depth for your area (if the ground even freezes where you are) and bury deeper than that.   Here, that’s anywhere from 30 to 50 inches, depending upon which website I’d want to believe.</p>
<p>In my defense, my back is shit and I hit the much harder sandy subsoil, so I gave up at two feet.  But hopefully the subsoil should drain reasonably well and if it doesn’t we’ll all find out together when I pull waterlogged Raspberry Pies out of the ground that I put there for no reason.</p>
<p>When you’re burying your computers, it’s probably a good idea to put a layer of stone underneath them to facilitate drainage.  The stone should be surrounded by landscape fabric to prevent earth from infiltrating and filling in the spaces between the stones.  Here I used landscape fabric, but it probably wasn’t super necessary.  I figured it would at least provide some protection to the coffins.</p>
<p>There was one last step to preparing the grey coffin.  Because the yellow one had its ethernet cable subsumed by the epoxy, I had to cut a very long tail for it – about 50 ft, way longer than necessary, just to be safe.  To make more efficient use of the somewhat expensive outdoor/direct-burial CAT 6, for the grey guy I ran the tail from the spool directly into it, so that I could measure it out accurately by laying the cable.</p>
<p>The cable and grommet got plenty of silicone grease, as did the unused connectors inside the case.</p>
<p>Then I sealed them shut using a couple of plastic and one metal zip tie each.  This was to prevent the latches from popping open either during the burial process, or from the stress of the freeze/thaw cycles.</p>
<p>With the Qilipsu re-attached to the wall and powered up, all that remained was the funeral.</p>
<p>I put a loop of extra cable from each coffin underneath them, to keep strain off of the inlets to the boxes during burial and freeze/thaw.</p>
<p>One last test was in order before shoveling the soil in, so it’s back to the router to terminate and connect the other ends of the ethernet cables.  I decided to give the weatherproof jacks a proper test by not putting any dielectric grease on those connectors.  Of course, they’re at the bottom of the box so water won’t rampantly flow upwards.  But you always have to keep capillary action in mind, where water can indeed work its way into an enclosure against gravity.</p>
<p>Anyhow, the coffins both powered up fine, and so the entombment began as daylight faded.</p>
<p>Uh, apologies for the absolutely crap framing here where I cut off the edge of the hole, but you get the idea.</p>
<p>I tamped down the first layer of dirt by stomping on it to ensure that the coffins were firmly in place.  In retrospect I should have stomped another layer, because the ground has settled a bit in that spot.  If you’re doing this, you should leave some loose dirt on top before replacing the piece of sod, but you can definitely compact more than I did.</p>
<p>Speaking of which, the sod was a little unwieldy in that piece as I’d taken a ton of soil with it.  So I cut it in half before replacing it.  Absolutely no harm done if that happens to you.</p>
<p>Last but not least, I watered the area.  This wasn’t to test the water tightness of the coffins or anything, but it’s always advisable when laying sod.  Regular watering will obviously keep the grass alive, but also promote root growth into the soil beneath, bonding it all back together.</p>
<p>And such is the story of the Data Grave.</p>
<p>I’ve been pretty busy lately, so I’m writing this script almost 2 weeks afterwards.  In the interim there have been a couple of good rainfalls, and temperatures have been creeping down towards freezing.  So far both underground Raspberry Pies are fully functional, and responding to requests.</p>
<p>When you go to outdoor.s.co.tt, you’ll see a link to each coffin at the bottom.  The web pages are really nothing special to look at, but they are served from their subterranean location.</p>
<p>Being as this is a proof of concept, the story isn’t over.  In about a year – or when both Pies fail – I’ll dig the data coffins up to see how they fared.  So if you’re watching this video in 2024 or later, check my channel for the conclusion.</p>
<p>At some point sooner than that, I’m going to be posting a video about another coffin that’ll be serviceable rather than potted.  So if you’re watching this after the second quarter of 2023, check out that video, too.</p>
]]></content:encoded>
			<wfw:commentRss>http://s.co.tt/2022/12/07/data-grave-underground-backup-servers/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>How and Why to Backup your Data</title>
		<link>http://s.co.tt/2021/12/01/how-and-why-to-backup-your-data/</link>
		<comments>http://s.co.tt/2021/12/01/how-and-why-to-backup-your-data/#comments</comments>
		<pubDate>Thu, 02 Dec 2021 03:14:47 +0000</pubDate>
		<dc:creator><![CDATA[Scott]]></dc:creator>
				<category><![CDATA[Computers]]></category>
		<category><![CDATA[Rants]]></category>
		<category><![CDATA[Videos]]></category>
		<category><![CDATA[backups]]></category>
		<category><![CDATA[computer]]></category>
		<category><![CDATA[rant]]></category>

		<guid isPermaLink="false">http://s.co.tt/?p=2214</guid>
		<description><![CDATA[This is another backing-up-your-data rant, but even though I&#8217;m posting this second it technically comes first in the order of shooting. And I think it&#8217;s a bit more informative and organized. So if you only watch one rant about backups this year, make it this one. Viva 2018.]]></description>
				<content:encoded><![CDATA[<p><center><iframe width="640" height="360" src="https://www.youtube.com/embed/uECeXAyja-0" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></center></p>
<p>This is another backing-up-your-data rant, but even though I&#8217;m posting this second it technically comes first in the order of shooting.  And I think it&#8217;s a bit more informative and organized.  So if you only watch one rant about backups this year, make it this one.</p>
<p>Viva 2018.</p>
]]></content:encoded>
			<wfw:commentRss>http://s.co.tt/2021/12/01/how-and-why-to-backup-your-data/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Creating a Backup Server from a HP DL380 G8 (StoreOnce 2900) &#124; Hardware to Software RAID Conversion &#124; Rambling</title>
		<link>http://s.co.tt/2021/11/27/creating-a-backup-server-from-a-hp-dl380-g8-storeonce-2900-hardware-to-software-raid-conversion-rambling/</link>
		<comments>http://s.co.tt/2021/11/27/creating-a-backup-server-from-a-hp-dl380-g8-storeonce-2900-hardware-to-software-raid-conversion-rambling/#comments</comments>
		<pubDate>Sat, 27 Nov 2021 19:44:23 +0000</pubDate>
		<dc:creator><![CDATA[Scott]]></dc:creator>
				<category><![CDATA[Computers]]></category>
		<category><![CDATA[Rants]]></category>
		<category><![CDATA[Videos]]></category>
		<category><![CDATA[backups]]></category>
		<category><![CDATA[howto]]></category>
		<category><![CDATA[RAID]]></category>
		<category><![CDATA[rant]]></category>
		<category><![CDATA[server]]></category>

		<guid isPermaLink="false">http://s.co.tt/?p=2202</guid>
		<description><![CDATA[I converted an older HP DL380 Gen8 (aka a StoreOnce 2900) from using a hardware RAID controller to an HBA for software RAID. The conversion is simple, but the video is long af because I spend a lot of time discussing the &#8220;why&#8221; more than the &#8220;how&#8221;. In an excerpt from that video, I talk about the total cost of ownership of RAID arrays. This describes why I created the RAID HDD TCO Calculator which helps you figure out the total cost of ownership of a RAID array, inclusive of stuff like electrical and cooling costs.]]></description>
				<content:encoded><![CDATA[<p><center><iframe width="640" height="360" src="https://www.youtube.com/embed/58nsCguqjRs" title="Creating a Backup Server from a HP DL380 G8 (StoreOnce 2900) | HW to SW RAID Conversion | Rambling" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></center></p>
<p>I converted an older HP DL380 Gen8 (aka a StoreOnce 2900) from using a hardware RAID controller to an HBA for software RAID.  The conversion is simple, but the video is long af because I spend a lot of time discussing the &#8220;why&#8221; more than the &#8220;how&#8221;.</p>
<p><center><iframe width="640" height="360" src="https://www.youtube.com/embed/FFiX3agOUk0" title="RAID TCO Rant (or Why I Used 4TB Drives in a Backup Server)" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe></center></p>
<p>In this excerpt from the above video, I talk about the total cost of ownership of RAID arrays.</p>
<p>This describes why I created the <a href="/2019/04/05/hard-drive-raid-tco-calculator-total-cost-of-ownership/">RAID HDD TCO Calculator</a> which helps you figure out the total cost of ownership of a RAID array, inclusive of stuff like electrical and cooling costs.</p>
]]></content:encoded>
			<wfw:commentRss>http://s.co.tt/2021/11/27/creating-a-backup-server-from-a-hp-dl380-g8-storeonce-2900-hardware-to-software-raid-conversion-rambling/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Clone a Dynamic Disk to a New SSD in Windows 10</title>
		<link>http://s.co.tt/2019/11/08/clone-a-dynamic-disk-to-a-new-ssd-in-windows-10/</link>
		<comments>http://s.co.tt/2019/11/08/clone-a-dynamic-disk-to-a-new-ssd-in-windows-10/#comments</comments>
		<pubDate>Fri, 08 Nov 2019 05:04:28 +0000</pubDate>
		<dc:creator><![CDATA[Scott]]></dc:creator>
				<category><![CDATA[Computers]]></category>
		<category><![CDATA[backups]]></category>
		<category><![CDATA[clonezilla]]></category>
		<category><![CDATA[computer]]></category>
		<category><![CDATA[disk clone]]></category>
		<category><![CDATA[howto]]></category>
		<category><![CDATA[Windows]]></category>
		<category><![CDATA[Windows 10]]></category>

		<guid isPermaLink="false">http://s.co.tt/?p=2058</guid>
		<description><![CDATA[This is partially just for my own reference, so I don&#8217;t have to go down this rabbit hole again. (But I hope it helps you, too!) The Situation I wanted to upgrade the LITE-ON 256GB SSD in my trusty ol&#8217; Lenovo X1 Carbon laptop to a snazzy new Samsung 960 EVO 2TB drive. I have a version of Acronis that came with a Crucial (or Kingston?) SSD, which has worked great in the past. The problem? There was a system reserved partition at the very end of the disk, and Acronis therefore would not proportionally scale the OS partition to fill the disk; it would only scale that system reserved partition. In a moment of errant stupidity, I said, &#8220;Ah-hah! … <a class="continue-reading-link" href="http://s.co.tt/2019/11/08/clone-a-dynamic-disk-to-a-new-ssd-in-windows-10/"> Continue reading</a>]]></description>
				<content:encoded><![CDATA[<p><img src="http://s.co.tt/wp-content/uploads/2019/11/Clonezilla_Vomits_Feces_onto_Other_Tools_Because_thats_Classy-740x416.jpg" alt="Clonezilla Vomits Feces onto Other Tools Because that&#039;s Classy" width="740" height="416" class="aligncenter size-large wp-image-2068" /></p>
<p>This is partially just for my own reference, so I don&#8217;t have to go down this rabbit hole again.  (But I hope it helps you, too!)</p>
<h2>The Situation</h2>
<p>I wanted to upgrade the LITE-ON 256GB SSD in my trusty ol&#8217; Lenovo X1 Carbon laptop to a snazzy new Samsung 960 EVO 2TB drive.  I have a version of Acronis that came with a Crucial (or Kingston?) SSD, which has worked great in the past.  The problem?  </p>
<p>There was a system reserved partition at the very end of the disk, and Acronis therefore would not proportionally scale the OS partition to fill the disk; it would only scale that system reserved partition.</p>
<p>In a moment of errant stupidity, I said, &#8220;Ah-hah!  If I make the drive a <strong>dynamic disk</strong>, that will allow me to rearrange the partitions!&#8221;  (It most certainly will <strong>not</strong>.)</p>
<p>So I made the main boot drive a <strong>dynamic disk</strong>.  I didn&#8217;t think for a second that would render the drive un-clone-able by most any software.  Yet, that&#8217;s what it did.</p>
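<p>If you&#8217;re not sure whether a disk is basic or dynamic, <code>diskpart</code> will tell you.  (A hypothetical transcript; your disk numbers and sizes will differ.)</p>

```
C:\> diskpart

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online          238 GB      0 B   *
```

<p>An asterisk in the <strong>Dyn</strong> column means the disk is dynamic; blank means it&#8217;s a basic disk.</p>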
<p>Hence my odyssey began&#8230;</p>
<p>I tried EaseUS, AOMEI, Acronis, Clonezilla, creating a RAID1 array, Windows Image Backup and even <code>dd</code>.</p>
<p>The free versions of the paid tools (the former three) would not clone dynamic disks, though some claimed that the paid/pro version would.  However I wasn&#8217;t going to shell out between $49 and $99 to do something that should be free.</p>
<p>Clonezilla had no problem actually cloning the drive, but a non-proportional clone resulted in the remaining ~1.75TB being unusable.  I couldn&#8217;t create a partition on it using Windows (either in the GUI or using <code>diskpart</code>), and so I tried booting into <code>gparted</code> and creating a new NTFS partition at the end of the disk.  That <em>ostensibly</em> worked fine, but then I got the dreaded <code>INACCESSIBLE_BOOT_DEVICE</code> error.</p>
<p>The same error resulted when doing a proportional clone in Clonezilla.</p>
<p>You can&#8217;t create a RAID array using a USB drive&#8230; or at least I couldn&#8217;t.  (The new SSD was housed temporarily in a USB enclosure.)</p>
<p>Windows Image Backup worked to do the actual&#8230; ahem&#8230; <em>backup</em>, but when I went to restore I got some obscure error about the Volume Shadow Copy service (?????).</p>
<p>And <code>dd</code> just caused the thing not to boot.</p>
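<p>For the record, a raw <code>dd</code> clone is conceptually simple, and the sketch below demonstrates it safely on throwaway image files rather than real disks.  (On real hardware you&#8217;d point <code>if=</code> and <code>of=</code> at device paths like <code>/dev/sdX</code>; any such names are hypothetical here, so triple-check them before running anything.)</p>

```shell
# Create a 4 MiB "source disk" and a larger 8 MiB "target disk".
dd if=/dev/urandom of=source.img bs=1M count=4 2>/dev/null
dd if=/dev/zero    of=target.img bs=1M count=8 2>/dev/null

# The raw clone: copies every byte verbatim, partition table and all.
# conv=notrunc keeps the target file at its original (larger) size,
# just like a bigger physical drive would keep its full capacity.
dd if=source.img of=target.img bs=1M conv=notrunc 2>/dev/null

# The cloned region matches byte-for-byte...
cmp -n $((4*1024*1024)) source.img target.img && echo "clone verified"

# ...but the remaining 4 MiB of the target is untouched, which is exactly
# why a byte-for-byte clone leaves the extra space on a bigger drive
# unusable until you repartition.
```

<p>This also hints at why <code>dd</code> couldn&#8217;t fix my boot problem: it faithfully reproduces whatever on-disk metadata was presumably making the moved copy unbootable in the first place.</p>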
<h2>Requisite Disclaimer</h2>
<p>If you make one minor mistake while doing the below, you could wipe out all the data on your original drive.</p>
<p>Heck, even if you do everything right, your original drive might decide to poop the bed.</p>
<p>So before doing any of this <strong>back up your important files</strong> to a flash drive, another SSD, a hard drive, a cloud, or 4,000,000,000,000 punch cards.  Show the hex representations of each file to an android so that he/she/it can later recreate them via a keyboard with their hands but a blur.  </p>
<p><strong>Literally anything is better</strong> than just assuming you&#8217;ll pull this off without issue.</p>
<h2>The Solution!</h2>
<p>I <strong>put the new 2TB SSD in the laptop</strong> and wiped the partitions using <code>gparted</code>.  (You won&#8217;t have to do this if you&#8217;re starting from scratch, as there won&#8217;t be any partitions.)</p>
<p>Then I <strong>installed a fresh copy of Windows 10</strong> using the default settings.</p>
<p>This accomplished two important things:</p>
<ul>
<li>It created a 1.86TB (usable) partition for the OS</li>
<li>It rendered the SSD bootable <strong>to that partition</strong></li>
</ul>
<p>When you&#8217;re installing this fresh copy of Windows, <strong>leave the network disconnected</strong> because you don&#8217;t want to get snagged into doing lengthy updates for no reason.</p>
<p>Furthermore, <strong>don&#8217;t bother changing any settings or doing anything other than the default procedure</strong>.  Because the next step will wipe all of that out.</p>
<h2>The Next Step</h2>
<p>Connect the original SSD via USB (or via whatever).</p>
<p>Fire up Clonezilla.</p>
<p>Set it to <strong>expert mode</strong>.</p>
<p>Don&#8217;t let expert mode intimidate you.  Most everything is going to stay at the default.</p>
<p>Select a <strong>local partition to local partition</strong> clone.</p>
<p>Choose the OS partition on your original drive as the source.</p>
<p>Choose the OS partition on the new drive as the target.</p>
<p><strong>Check and check again and again that you have the right source and target selected.</strong>  If you get it the wrong way around, you&#8217;ll end up with your virgin Windows install overwriting your original OS and related files.</p>
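<p>One habit that makes that check easier: before you commit, drop to a shell (Clonezilla lets you) and identify the drives by size and transport.  Any device names you see are specific to your machine; in a setup like mine, the original drive would be the USB-attached one.</p>

```shell
# List block devices with enough detail to tell the drives apart:
# NAME (device), SIZE, MODEL, and TRAN (transport: sata, nvme, usb, ...).
lsblk -o NAME,SIZE,MODEL,TRAN
```

<p>The drive on the <code>usb</code> transport is the original in its enclosure; the internal <code>nvme</code>/<code>sata</code> one is the target.  If you can&#8217;t tell them apart here, don&#8217;t guess.</p>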
<p>When the option comes up, <strong>choose to clone the partition proportionally</strong> so that it fills the disk.  (I&#8217;m assuming that you&#8217;re going up in size to a larger drive like I was, but it should also work with a drive of the same size.  Smaller will not work.)</p>
<p><strong>Commence the clonein&#8217;.</strong></p>
<p>Once complete, disconnect the original SSD that&#8217;s connected via USB, and remove the Clonezilla flash drive.</p>
<p>Reboot, and <strong>you should now have a functional copy of your old system drive.</strong></p>
<p>The new drive will also be marked as a <strong>basic disk</strong> and can therefore be cloned by most any software to your heart&#8217;s content.</p>
]]></content:encoded>
			<wfw:commentRss>http://s.co.tt/2019/11/08/clone-a-dynamic-disk-to-a-new-ssd-in-windows-10/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
