<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>ScottDotDot </title>
	<atom:link href="http://s.co.tt/tag/centos/feed/" rel="self" type="application/rss+xml" />
	<link>http://s.co.tt</link>
	<description>Babblings of a computer curmudgeon.</description>
	<lastBuildDate>Mon, 26 Jan 2026 16:08:52 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.1</generator>
	<item>
		<title>Bash &#8220;Shellshock&#8221; Bug &#8211; Quick Vulnerability Test and Patch</title>
		<link>http://s.co.tt/2014/09/25/bash-shellshock-bug-quick-vulnerability-test-and-patch/</link>
		<comments>http://s.co.tt/2014/09/25/bash-shellshock-bug-quick-vulnerability-test-and-patch/#comments</comments>
		<pubDate>Thu, 25 Sep 2014 15:56:10 +0000</pubDate>
		<dc:creator><![CDATA[Scott]]></dc:creator>
				<category><![CDATA[Computers]]></category>
		<category><![CDATA[CentOS]]></category>
		<category><![CDATA[computer]]></category>

		<guid isPermaLink="false">http://s.co.tt/blog/?p=925</guid>
		<description><![CDATA[This is not meant as a comprehensive guide to the Bash &#8220;shell shock&#8221; bug, but as a quick reference to test and patch for the vulnerability. First, test your version of Bash with this line: env x='() { :;}; echo vulnerable' bash -c "echo this is a test" If you get the word &#8220;vulnerable&#8221; in your output then you need to update Bash: vulnerable this is a test If your output contains errors followed by &#8220;this is a test&#8221;, then your Bash version is not vulnerable: bash: warning: x: ignoring function definition attempt bash: error importing function definition for `x' this is a test Check to see if your distribution has an updated/fixed version of Bash available in its repository. … <a class="continue-reading-link" href="http://s.co.tt/2014/09/25/bash-shellshock-bug-quick-vulnerability-test-and-patch/"> Continue reading</a>]]></description>
				<content:encoded><![CDATA[<p>This is not meant as a comprehensive guide to the Bash &#8220;shell shock&#8221; bug, but as a quick reference to test and patch for the vulnerability.</p>
<p>First, test your version of Bash with this line:</p>
<pre><code>env x='() { :;}; echo vulnerable' bash -c "echo this is a test"</code></pre>
<p>If you get the word &#8220;vulnerable&#8221; in your output then <strong>you need to update Bash</strong>:</p>
<pre><code>vulnerable
this is a test</code></pre>
<p>If your output contains errors followed by &#8220;this is a test&#8221;, then your Bash version is <strong>not vulnerable</strong>:</p>
<pre><code>bash: warning: x: ignoring function definition attempt
bash: error importing function definition for `x'
this is a test</code></pre>
<p>Check to see if your distribution has an updated/fixed version of Bash available in its repository.</p>
<p>I&#8217;m a heavy CentOS user, and I can verify that there is a fixed version available for both CentOS 5 and 6.</p>
<p>This is all you technically need:</p>
<pre><code>yum update bash</code></pre>
<p>It&#8217;s always recommended to <strong>fully update your system</strong>, but if you&#8217;re purposefully running &#8220;legacy&#8221; or deprecated versions of software, then updating Bash by itself right now can be advantageous (or necessary).</p>
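<p>After updating, re-run the same probe to confirm the fix took. One caveat I&#8217;ll hedge on: depending on how recent your patched build is, you may see the warnings shown above on stderr, or no extra output at all &#8212; the one thing you must never see is the word &#8220;vulnerable&#8221;.</p>

```shell
# Re-run the probe after updating; the injected function body must not execute.
# Newer patched builds simply ignore the variable, so there may be no warning.
env x='() { :;}; echo vulnerable' bash -c "echo patched check"
# On CentOS you can also check the package changelog for the fix:
#   rpm -q --changelog bash | grep CVE-2014-6271
```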
<p><a href="http://s.co.tt/blog/wp-content/uploads/2014/09/eggshell.jpg"><img src="http://s.co.tt/blog/wp-content/uploads/2014/09/eggshell-300x199.jpg" alt="Shellshock" title="Shellshock" width="300" height="199" class="aligncenter size-medium wp-image-927" /></a></p>
]]></content:encoded>
			<wfw:commentRss>http://s.co.tt/2014/09/25/bash-shellshock-bug-quick-vulnerability-test-and-patch/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Fix for: Keepalived router enters fault state on link down</title>
		<link>http://s.co.tt/2014/06/06/fix-for-keepalived-router-enters-fault-state-on-link-down/</link>
		<comments>http://s.co.tt/2014/06/06/fix-for-keepalived-router-enters-fault-state-on-link-down/#comments</comments>
		<pubDate>Fri, 06 Jun 2014 19:01:49 +0000</pubDate>
		<dc:creator><![CDATA[Scott]]></dc:creator>
				<category><![CDATA[Computers]]></category>
		<category><![CDATA[CentOS]]></category>
		<category><![CDATA[keepalived]]></category>
		<category><![CDATA[routers]]></category>

		<guid isPermaLink="false">http://s.co.tt/blog/?p=540</guid>
		<description><![CDATA[TL;DR: This is the configuration option you want: dont_track_primary At work and at home I have pairs of redundant &#8220;core&#8221; routers in an active-passive (or master-backup as you like) configuration. They consist of commodity hardware, a few 4-port gigabit NICs, and CentOS. All of these machines had been running flawlessly for anywhere from two to six years (as they were put into service or upgraded). That is until yesterday when my primary router at home had an SSD failure which completely stopped it in its tracks. The backup router took over, and in less than a second traffic was being routed. All of my point-to-point VPNs reconnected within about 20 seconds. In other words, it worked exactly as it should. … <a class="continue-reading-link" href="http://s.co.tt/2014/06/06/fix-for-keepalived-router-enters-fault-state-on-link-down/"> Continue reading</a>]]></description>
				<content:encoded><![CDATA[<p><strong>TL;DR:</strong>  This is the configuration option you want:  <strong>dont_track_primary</strong></p>
<p>At work and at home I have pairs of redundant &#8220;core&#8221; routers in an active-passive (or master-backup as you like) configuration.  They consist of commodity hardware, a few 4-port gigabit NICs, and CentOS.  All of these machines had been running flawlessly for anywhere from two to six years (as they were put into service or upgraded).</p>
<p>That is until yesterday when my primary router at home had an SSD failure which completely stopped it in its tracks.  The backup router took over, and in less than a second traffic was being routed.  All of my point-to-point VPNs reconnected within about 20 seconds.  In other words, it worked exactly as it should.</p>
<p>Until I turned off power to the broken router.  Then everything stopped.</p>
<p>I had made a minor change to my router pair a few months ago, and didn&#8217;t think anything of it.  Instead of running VRRP traffic through the switch, I had dedicated a NIC port on each machine and connected them directly using a crossover cable.  I had only tested by bringing the primary router down gracefully, and did not pull the plug.</p>
<p>When the plug was pulled on the broken router, the now-master saw the link go down on the VRRP port and keepalived went into the FAULT state.  It gave up its VIPs and basically stopped keeping anything alive.</p>
<p>That behavior can make sense in certain scenarios.  For example, if just the NIC port used for VRRP went down on the master router, I wouldn&#8217;t want the backup <strong>also</strong> taking the VIPs (and certain routes, etc.).  If I had VRRP going through one switch and production traffic going through another, I wouldn&#8217;t want a failure on the less important switch to cause VIP conflicts, either.</p>
<p>In my case, I find it <strong>much</strong> (much, much, much) more likely that the link having gone down will mean that one of the machines has died completely.  In my experience power supplies and HDDs (or SSDs) are far more likely to fail than a NIC or NIC port.  It&#8217;s not to say that the latter is impossible, but rather that I have to plan for the most likely worst-case scenario.</p>
<p>All that being said, there is one keepalived.conf setting that obviates this issue:  <strong>dont_track_primary</strong></p>
<p>That&#8217;s it.  It doesn&#8217;t have options or qualifiers.  From the <a href="http://manpages.ubuntu.com/manpages/hardy/man5/keepalived.conf.5.html" target="_blank">man page</a>:</p>
<p><code># Ignore VRRP interface faults (default unset)<br />
dont_track_primary</code></p>
<p>From the <a href="http://www.keepalived.org/changelog.html">keepalived changelog</a>:</p>
<p><code>VRRP : Chris Caputo added "dont_track_primary"<br />
  vrrp_instance keyword which tells keepalived to ignore VRRP<br />
  interface faults. Can be useful on setup where two routers<br />
  are connected directly to each other on the interface used<br />
  for VRRP. Without this feature the link down caused<br />
  by one router crashing would also inspire the other router to lose<br />
 (or not gain) MASTER state, since it was also tracking link status.</code></p>
<p>Perfect, right?</p>
<p>Here&#8217;s my keepalived configuration, sanitized and edited for brevity:</p>
<pre><code>global_defs {
   notification_email {
     <em>me@mydomain.corn</em>
   }
   notification_email_from rtr-core02@int.<em>meagain</em>.net
   smtp_server 10.80.1.41
   smtp_connect_timeout 30
   router_id RTR-CORE-A
}
vrrp_instance VI_0 {
    state BACKUP
    interface p4p1
    smtp_alert
    virtual_router_id 50
    priority 50
    advert_int 1
    dont_track_primary
    notify_master /etc/keepalived/promotemaster
    notify_backup /etc/keepalived/promotebackup
    authentication {
        auth_type PASS
        auth_pass <em>sanitizedpassword</em>
    }
    virtual_ipaddress {
        192.168.1.1/24 brd 192.168.1.255 dev p3p1 label p3p1:100
        192.168.1.2/24 brd 192.168.1.255 dev p3p1 label p3p1:101
        10.1.1.1/24 brd 10.1.1.255 dev p3p2 label p3p2:100
        10.1.1.2/24 brd 10.1.1.255 dev p3p2 label p3p2:101
        <em># Many VIPs omitted here for brevity</em>
    }
    virtual_routes {
        158.209.0.99/32 via 78.123.265.1 dev p1p1 table main
        0.0.0.0/0 via 91.59.24.131 dev p1p2 table 50
        193.266.0.0/16 via 91.59.24.131 dev p1p2 table main
        <em># Many routes omitted here for brevity.  IPs are sanitized/randomized</em>
    }
}</code></pre>
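<p>For completeness, here&#8217;s a sketch of what the peer router&#8217;s instance might look like.  Only the local identifiers and the priority differ; the values below are illustrative, not copied from my real config.  The important parts are that <code>virtual_router_id</code> matches and that <strong>both</strong> sides carry <code>dont_track_primary</code>:</p>

```
global_defs {
   ...
   router_id RTR-CORE-B
}
vrrp_instance VI_0 {
    state BACKUP
    interface p4p1
    virtual_router_id 50      # must match the peer
    priority 100              # higher than the peer's 50
    advert_int 1
    dont_track_primary        # same flag on both routers
    ...
}
```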
<p>I&#8217;m hoping that I put enough keywords in this article so that you found it easily.  The whole point of this post is to counter the drought of discussion on this topic.</p>
]]></content:encoded>
			<wfw:commentRss>http://s.co.tt/2014/06/06/fix-for-keepalived-router-enters-fault-state-on-link-down/feed/</wfw:commentRss>
		<slash:comments>3</slash:comments>
		</item>
		<item>
		<title>SAN with Linux Cluster and CLVM: Is it Necessary?</title>
		<link>http://s.co.tt/2013/09/04/san-with-linux-cluster-and-clvm-is-it-necessary/</link>
		<comments>http://s.co.tt/2013/09/04/san-with-linux-cluster-and-clvm-is-it-necessary/#comments</comments>
		<pubDate>Wed, 04 Sep 2013 16:45:44 +0000</pubDate>
		<dc:creator><![CDATA[Scott]]></dc:creator>
				<category><![CDATA[Computers]]></category>
		<category><![CDATA[CentOS]]></category>
		<category><![CDATA[clustering]]></category>
		<category><![CDATA[computer]]></category>
		<category><![CDATA[san switch]]></category>
		<category><![CDATA[Virtualization]]></category>

		<guid isPermaLink="false">http://s.co.tt/blog/?p=522</guid>
		<description><![CDATA[To answer the title of this post in one word: No. But as with all things computer related, that &#8220;no&#8221; needs to be followed by the caveat: &#8220;Well, it depends upon your needs.&#8221; From what I&#8217;ve seen, Linux clustering was designed primarily for high-availability services, with only a secondary effort to share disk resources across nodes. I have tried &#8212; and would never use in production &#8212; Linux clustering services for a VM host cluster. I know other people have done it and will continue to do it, but a properly configured (and managed) VM cluster does not need true clustering. (Again, &#8220;depending upon your needs&#8221;). Linux clustering requires fencing. (It didn&#8217;t always, but now it does). Fencing is a … <a class="continue-reading-link" href="http://s.co.tt/2013/09/04/san-with-linux-cluster-and-clvm-is-it-necessary/"> Continue reading</a>]]></description>
				<content:encoded><![CDATA[<p>To answer the title of this post in one word:  No.</p>
<p>But as with all things computer related, that &#8220;no&#8221; needs to be followed by the caveat: &#8220;Well, it depends upon your needs.&#8221;</p>
<p>From what I&#8217;ve seen, Linux clustering was designed primarily for high-availability services, with only a secondary effort to share disk resources across nodes.</p>
<p>I have tried &#8212; and would never use in production &#8212; Linux clustering services for a VM host cluster.  I know other people have done it and will continue to do it, but a properly configured (and managed) VM cluster does not need true clustering.  (Again, &#8220;depending upon your needs&#8221;).</p>
<p>Linux clustering requires fencing.  (It didn&#8217;t always, but now it does).  Fencing is a great thing in a homogeneous cluster where every machine is a clone of every other, and the point of the cluster is that it can lose a machine or six and still provide the same service(s).  The purpose of fencing is to &#8220;shoot a bad node in the head&#8221;.  This can either mean power-cycling it with an iLO or PDU, or disconnecting it from shared resources such as a SAN at the switch level.</p>
<p>Fencing is <strong>tremendously undesirable</strong> in a VM host cluster.  If the cluster decides that one of the nodes is bad, it will simply kill it.  In a <strong>hetero</strong>geneous cluster.  Killing potentially tens (or even hundreds) of your VM guests in one stroke.</p>
<p>Of course, in a VM cluster, fencing would still be required if you were using a shared file <strong>system</strong>.  However, LVM2 is another matter.</p>
<p>Another downside about Linux clustering is that to bring a failed cluster back to a consistent state, the <strong>recommended solution</strong> is to reboot all of the machines in the cluster simultaneously.  (I&#8217;ve found that recommendation made by developers in RHEL&#8217;s bug database, amongst other places).  In a production VM cluster, that&#8217;s unacceptable.</p>
<p><strong>Configuration and Management</strong></p>
<p>From a <strong>configuration</strong> standpoint, there&#8217;s nothing special about running non-clustered LVM on a shared disk in a cluster.  All you have to do is run <code>vgcreate</code> on one node using a shared LUN as a physical disk.  Then run <code>vgscan</code> on the other nodes in the cluster and you&#8217;ll see your new volume group.  No fuss, no muss.</p>
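<p>As an illustrative transcript of the steps above (device name and VG name are hypothetical &#8212; substitute your own shared LUN):</p>

```
# On node 1 (assuming /dev/mapper/mpatha is the shared LUN):
pvcreate /dev/mapper/mpatha
vgcreate vg_shared /dev/mapper/mpatha

# On every other node, just rescan:
vgscan
vgs        # vg_shared should now be listed
```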
<p>From a <strong>management</strong> standpoint, you have to be careful.  Very, very careful.  Writing to the LVM metadata simultaneously from different nodes (such as doing two <code>lvcreate</code>s) probably will result in metadata corruption, which could bork your entire cluster.  It would be disastrous to employ cron jobs on more than one host, for example, that wrote to LVM&#8217;s metadata.</p>
<p>The best practice in this case would be to designate one node as the &#8220;metadata writer&#8221;.  That simply means that you&#8217;d make all changes to LVM metadata from that machine.  On all other cluster nodes, rename the LVM tools (usually <code>/sbin/lvm</code> and <code>/sbin/lvmconf</code>), and put your own script in their place.  The script should output something like, &#8220;Please use the metadata writer node for changes to LVM&#8221;.</p>
<p>In most cases, command-line instances of LVM commands (e.g. <code>vgcreate</code>, <code>lvchange</code>, etc) are just symlinked to <code>lvm</code>.  If you want to be thorough, change the symlinks for read-only operations like <code>vgscan</code>, <code>vgdisplay</code>, <code>lvdisplay</code>, etc. to point to the renamed <code>lvm</code> utility.</p>
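<p>A minimal sketch of such a refusal stub &#8212; the message and path are mine, not any standard; I&#8217;m building it in a scratch directory here so you can try it safely before touching <code>/sbin</code>:</p>

```shell
# Build the stub in a scratch dir; on a real non-writer node you would first
#   mv /sbin/lvm /sbin/lvm.forreadonly
# and then install this script as /sbin/lvm.
mkdir -p /tmp/lvm-stub-demo
cat > /tmp/lvm-stub-demo/lvm <<'EOF'
#!/bin/sh
# Refuse LVM metadata changes on non-writer cluster nodes.
echo "Please use the metadata writer node for changes to LVM." >&2
exit 1
EOF
chmod +x /tmp/lvm-stub-demo/lvm
```

Since the per-command names (<code>lvcreate</code>, <code>vgchange</code>, and so on) are normally symlinks to <code>lvm</code>, they all hit the same stub once <code>/sbin/lvm</code> points at it.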
<p>Another gotcha is that there&#8217;s nothing to stop you from running two different instances of the same VM using the same logical volume.  Because there&#8217;s no distributed locking, the system will let you do it.  Of course, this is great if you designate the virtual disk as read-only, because you can share things like repositories and application images across as many VMs as you&#8217;d like.  But running two read/write instances will result in data corruption.</p>
<p>You do have recent backups, right?</p>
<p><strong>Be Warned</strong></p>
<p>I use this technique within a few CentOS / Xen clusters that I manage.  The key being that <strong>I</strong> manage them.  Bringing in an outside technician or new employee that doesn&#8217;t know how (and why) things are configured could very easily screw everything up, even with the best of intentions.  (&#8220;Gee, I wonder why he renamed <code>lvm</code> to <code>lvm.forreadonly</code>.  I&#8217;ll just use it anyway.&#8221;)</p>
<p>In fact, I&#8217;d wager that 99% of sysadmins would recommend that you don&#8217;t listen to me.  However, I&#8217;ve been running things this way for over four years now without a hiccup (knock on chassis).  Not having the overhead and headache of running a true cluster has been great.</p>
<p>Even I&#8217;d have to recommend against using this technique in a cluster larger than a handful of nodes.</p>
<p><strong>The Benefit</strong></p>
<p>Your mileage may vary, but my implementation is fairly straightforward:  In my SAN array, I export each set of spindles as a LUN consisting of 100% of their capacity.  The LUN is exposed to all the VM hosts, and I create a single volume group on the LUN-cum-physical disk.  I then create logical volumes within that VG as needed for the VM guests.  A LV is then used as a physical disk asset by a VM guest.  (I know some people stick partitions on top of their LVs, but I don&#8217;t see the benefit to that).</p>
<p>I can do live migrations, and for times where that&#8217;s not necessary or desirable (e.g. a VM with a large memory footprint that is not mission critical), I use shared configuration files.</p>
<p>I store all of the conf files for my virtual machines on a shared LV that&#8217;s mounted as read-only on all VM hosts (dom0s) except for the &#8220;metadata writer&#8221; where it&#8217;s mounted as read/write.  Therefore, when I want to migrate a guest VM from one host to another it&#8217;s just a matter of bringing the VM down, symlinking to the shared conf file on the new host, and removing the symlink from the old host.</p>
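<p>That migration step, sketched as commands &#8212; the guest name and paths are hypothetical, and I&#8217;m using a scratch tree here rather than the real shared mount and <code>/etc/xen</code>:</p>

```shell
# Stand-ins for the shared read-only conf LV and the new host's Xen config dir.
mkdir -p /tmp/migrate-demo/shared-confs /tmp/migrate-demo/new-host/etc-xen
printf "name = 'guest01'\n" > /tmp/migrate-demo/shared-confs/guest01.cfg

# On the new host: symlink to the shared conf file.
ln -s /tmp/migrate-demo/shared-confs/guest01.cfg \
      /tmp/migrate-demo/new-host/etc-xen/guest01.cfg

# On the old host you would then remove its symlink, e.g.:
#   rm /etc/xen/guest01.cfg
```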
<p><strong>In Conclusion</strong></p>
<p>I wanted to share my experiences and techniques not as a HOWTO, but simply as another way of looking at sharing resources between servers.  I&#8217;ve gotten many a raised eyebrow (and worse) from other sysadmins when I&#8217;ve described my setup.  But it does work, and despite all the pitfalls and caveats involved I maintain that it&#8217;s still pretty easy to ruin or disrupt a &#8220;traditional&#8221; cluster.  This just has less overhead, and fewer places to make mistakes (though mistakes can be really, really B-A-D).</p>
]]></content:encoded>
			<wfw:commentRss>http://s.co.tt/2013/09/04/san-with-linux-cluster-and-clvm-is-it-necessary/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
