Keep seeing this annoying message on FreeBSD, even though back on December 20th, 2013, I had set "security.bsd.unprivileged_mlock=1" in /etc/sysctl.conf to try to finally address this problem.
The default RLIMIT_MEMLOCK resource limit is 64k, which I would think is more than sufficient.
So, it was time to research this problem in more depth.
Found that there's a DEBUG_SECURE_MEMORY define to see how much memory it's trying to allocate. It's trying to allocate some multiple of 16k blocks, which it later refers to as pages. Which I seem to recall is Windows?; Solaris is 8k and most other systems are 4k (on my FreeBSD system, it's 4k). Well, it's only trying (and failing) to mlock 16k. So, I tried overriding the constant to 4k. But, this also failed.
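For reference, the page size everything below assumes is easy to check; a quick sketch (getconf is the portable route, and FreeBSD also exposes it as a sysctl):

```shell
# Portable query of the VM page size (4096 on my FreeBSD box):
getconf PAGESIZE
# FreeBSD-specific equivalent (commented out since it's OS-specific):
# sysctl hw.pagesize
```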
I had skimmed the man page, where it says:
Since physical memory is a potentially scarce resource, processes are limited in how much they can lock down. A single process can mlock() the minimum of a system-wide ``wired pages'' limit vm.max_wired and the per-process RLIMIT_MEMLOCK resource limit.
If security.bsd.unprivileged_mlock is set to 0 these calls are only available to the super-user.
Well, on my system vm.max_wired defaults to 1323555 and RLIMIT_MEMLOCK (ulimit -l) is 64.....so limit is 64k, right?
Wrong...delving into the kernel source...I found that it first checks that the requested amount + the amount it already has doesn't exceed RLIMIT_MEMLOCK, and then that the requested amount + the amount wired system-wide ("vm.stats.vm.v_wire_count") is not greater than vm.max_wired.
Well, when I looked at vm.stats.vm.v_wire_count, it was 2020311....it's already more than vm.max_wired.
I feel a PR coming on....
1323555 (which is about 5GB) is said to be 1/3 of some maximum. I have a 16GB system, probably not contiguous...and there's probably some amount reserved....but 2020311 is about 7.7GB.
I did a "sysctl vm.max_wired=2097152", and it took it (so I put that into /etc/sysctl.conf, too), and now gnome-keyring-daemon can start without that message.
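To make the arithmetic concrete, here's a sketch in plain sh of the two checks described above (my restatement of the logic, not the actual kernel code), using a 4k page size and the numbers from my system:

```shell
#!/bin/sh
# Model of the two admission checks the kernel performs on an mlock() request:
#   1. bytes requested (+ already locked) must not exceed RLIMIT_MEMLOCK
#   2. pages requested + vm.stats.vm.v_wire_count must not exceed vm.max_wired
# Args: request_bytes rlimit_memlock_bytes v_wire_count max_wired
check_mlock() {
    req_bytes=$1; rlimit=$2; wired=$3; max=$4
    page_size=4096
    req_pages=$(( (req_bytes + page_size - 1) / page_size ))
    if [ "$req_bytes" -gt "$rlimit" ]; then
        echo "ENOMEM: RLIMIT_MEMLOCK exceeded"
    elif [ $(( req_pages + wired )) -gt "$max" ]; then
        echo "EAGAIN: vm.max_wired exceeded"
    else
        echo "mlock would succeed"
    fi
}

# gnome-keyring's 16k request, 64k ulimit -l, my wire count, old default limit:
check_mlock 16384 65536 2020311 1323555   # prints "EAGAIN: vm.max_wired exceeded"
# same request after raising vm.max_wired:
check_mlock 16384 65536 2020311 2097152   # prints "mlock would succeed"
```

So even with the process well under its 64k RLIMIT_MEMLOCK, the system-wide wired count alone was enough to make every unprivileged mlock fail.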
So, near the end of July, I started investigating (once again) replacing my HP Photosmart 8450xi (which was now over 8 years old....bought it on June 30th, 2005 - Back from Vacation Tech Buying Spree?...setup on July 9th, 2005 - link).
I had started looking some time before this, but was put off for a bit due to my experiences with the Brother DCP-7065DN -- link, since it seemed most of the choices out there were GDI and I'm moving more and more heavily to FreeBSD as my primary operating system.
Especially since it appears that 'box' finally called it quits on December 2nd, before I had started my journey home from Chicago TARDIS that day....and orac is inching close to its end, as the pair of ST2000DL003's, which evidently only had 1 year warranties from June/September 2012, started going shortly into the new year. I was trying to use ddrescue to force sector remapping on the first drive, when the other drive decided to vanish permanently. I had thought it was DM's that had 1 year and DL's that had 5 years, perhaps I had it backwards....or it's a question of when I purchased them, or how they were packaged.
Checking my order history, I purchased one drive in June as a bare drive and the other in September as a retail kit. I haven't yet pulled the drives, so I can't look up the serial number for the vanished one, but Seagate's website says the one that is responding is out of warranty. Even if the other drive is still under warranty, not sure I want to deal with getting it exchanged for a refurb to create a solo 2TB drive. Can't see not wanting RAID given what I'll likely use it for. And, not sure I'd buy a different 2TB drive to be its mate (and it won't work with my other 2TB arrays, since it's an advanced format 2TB drive...while the lraidz2 pool on zen used legacy format 2TB drives, which limits options for growing it non-destructively).
Fortunately, I had copied one of the big volumes from it over to zen (along the way it got corrupted, so I had been trying to copy it back from zen when the other drive died). And, files of the other volume (my pyTiVo store) should all be in backup, where I don't have space on zen to restore them yet.... I have pyTiVo on zen, but the content under it is different...and larger, so much that it is currently not being backed up. I haven't made much progress on building the second backup server....guess I'll need to look at this sooner than later.
And, now it seems the other 2TB RAID-1 array on orac is dying. I just went ahead and failed the drive that was giving it issues. Not sure what to do with it...suppose I could try ddrescue on it and see what happens. The big volume on it had also been copied over to zen, so guess I'll update my HSTi's to point to zen instead of orac for their content. Another used to be for Time Machine backups, but I had moved that over to zen when I set up the new work laptop to do Time Machine backups on my home network. I was using that space as overflow from pyTiVo. And, another was for backups of various things, which I had stopped adding to as new backups are going to zen now. It's things like regular backups of my websites at dreamhost and 1and1, my router configs, serial console servers, and some other backups. I was also replicating some directories on zen to orac as backup (left over from when zen was a Windows 7 PC....which saved me from losing everything when it scrambled itself.)
But, back to my printer quest.
On my FreeBSD system, my apache webserver would get angry whenever I updated the php & extensions ports, requiring a bunch of other operations after the '
Since I've been playing around with CFEngine 3, I had started to add to my "bundle agent apache", to do more than just promise config files current, process running, reloads, etc.
So, one of the first problems I had run into on FreeBSD is that there are certain extensions that need to be in order in '/usr/local/etc/php/extensions.ini'. Which is solved by using
Well, fortunately when this script is run it results in a backup file of 'extensions.ini.old' which is the same age or newer than 'extensions.ini'. CFEngine3 can take care of it this way:
g.apache is "apache22" currently on FreeBSD, and "apache2" on Ubuntu. Someday it might become "apache24" on FreeBSD.
Since I did FreeBSD first, and I'm still working on getting one of my 4 (or fewer) Ubuntu systems rolled in, I have: g.rc_d as "/etc/rc.d" and g.lrc_d as "/usr/local/etc/rc.d" for FreeBSD. They are both set to "/etc/init.d" for Ubuntu. I also have a g.init_d for Ubuntu, but not FreeBSD. Not sure which I'll use where....I suppose if it's an OS-specific case, g.init_d would get used, and if it's not...then whichever one is the correct one for FreeBSD will get used.
A couple months ago I asked if mosh could be made to work if the mosh-server IP changes when roaming between networks.
Years ago, I used to have routers that did 'loopback', but haven't had ones capable of it for some time...or so I thought. Though I hadn't really had a major need for it. Except perhaps for mosh.
mosh, MObile SHell, is an ssh replacement that supports roaming and intermittent connectivity. Since I do my IRC using irssi in screen, running all the time on a server at home, this makes staying connected to IRC on my laptop much nicer. I can close my laptop, and later open it and it'll still be connected to my screen session.
The problem was when I came home, I'd be unable to recover the connection correctly and the client would go into an unrecoverable state, so that even if I later used my laptop on an outside network the mosh session wouldn't resume.
But, today I opened my laptop (and I just realized that I didn't do what I had intended to do) and I just minimized the window out of the way...even though it probably wouldn't recover on Monday at work. But, the dock icon showed that something wanted my attention....probably mosh-client giving up? No. Well, my nick had come up a couple of times yesterday, but it shouldn't have known that....but not really thinking, I switch to the channel. And, it does. I switch around and it's working. Wait...it shouldn't be though!
So what changed? I do a tcpdump and see that it is connecting to my WAN IP and getting responses from my WAN IP....'loopback' never worked for me though....
Perhaps it's 'loopback' of port forwards that has never worked....
I had moved irssi from box to dbox a while back. The router has two port forwards related to this set to box: a single-port TCP forward and a port-range UDP forward.
But, because my other router is running stock firmware, it has a limited number of port forwards...so as I was migrating services to cbox (and using nginx to reverse proxy web services on other systems on my home network, where those that use a webserver are using apache, including local services...such as cacti on cbox and nagios on dbox), I decided that I would just make cbox the DMZ host and start running host-based firewalls at home, especially on this host (it also uses an IP alias...kind of like how we do hosts behind the BigIP at work).
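For reference, that reverse-proxy arrangement is just a handful of proxy_pass stanzas on cbox; a minimal sketch (the hostnames and port are placeholders, not my actual config, which also carries more proxy headers):

```nginx
# Sketch: nginx on cbox fronting apache-served web apps on internal hosts.
# Hostnames are illustrative stand-ins for my internal names.
server {
    listen 80;
    server_name nagios.home.example;

    location / {
        proxy_pass http://dbox;                  # apache serving nagios on dbox
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```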
So that means no port forward(s) on my dd-wrt router from WAN to dbox....so I guess the NAT allows 'loopback'ing in this case.
Wonder if the same applies to my other router.
The only problem this causes is that I had plans to replace routers. I actually have a new router to replace the current stock router....though I haven't got anything that really needs the speed upgrade to 802.11ac yet in the room where I'm using wireless bridging. I also had plans to replace my dd-wrt router, which had started getting unreliable, as they seem to do after a while....though it seems to have helped after I deleted old traffic data....
There are Unix servers at work that have uptimes in the >1000 days, there are even servers with uptimes in the >2000 days; in fact there are servers that have now exceeded 2500 days (I'm looking at one with 2562+ days.)
On one hand there are SAs that see this as a badge of honor or something, to have had a system stay up this long. OTOH, it's a system of great dread.
A while back this system was having problems....it's Solaris and somebody had filled up /tmp....fortunately, I was able to clean things up and recover before another SA resorted to hard rebooting it.
The problem with these long running servers, especially in an ever-changing, multi-admin shop, is that you can't be sure that the system will come back up correctly after a reboot.
We've lost a few systems at work due to a reboot. Some significant ones as simple as replacing a root disk under vxvm and forgetting to update the Sun partition table, or a zpool upgrade and forgetting to reinstall the boot blocks. To more significant ones, where a former SA had temporarily changed the purpose of an existing system all by command line, running out of /tmp...so that after it had been up for 3+ years and he had been gone over a year....patching and rebooting made it disappear.... The hardware that the system was supposed to be on needed repair, but he had never gotten around to it.
It'll be interesting to see what happens should the system ever get rebooted.
So, what brought this post on?
So, what started as taking a week to set up a new nagios server at work ended up taking almost a month...because there were many days where I'd only have an hour or less to put into the side task. The other stumbling block was I had decided that the new nagios server configuration files would be managed under subversion, instead of RCS as it had been done in the previous two incarnations. New SAs don't seem to understand RCS and that the file is read-only for a reason...and it's not to make them use :w! ... which lately has resulted in the sudden reappearance of monitors of systems that had been shutdown long ago.
Though now that I think of it, there used to be a documented procedure for editing zone files (back when it was done directly on the master nameserver and version controlled by RCS). Which as I recall was to perform an rcsdiff, and then use the appropriate workflow to edit the zone file:
% rcsdiff zonefile
if differences
    % rcs -l zonefile
    % ci -l zonefile
    make rude comment that somebody made edits
    % vi zonefile
    % ci -u zonefile
else
    % co -l zonefile
    % vi zonefile
    % ci -u zonefile
fi
But, when I took over managing DNS servers, I switched to having cfengine manage them, and the zone files now live under masterfiles, so version control is now done using subversion. Had started butchering the DNS section in the wiki; probably should see about writing something up on all the not so simple things I've done to DNS since taking it over...like split, stealth, sed processing of the master zone for different views, DNSSEC, the incomplete work to allow an outside secondary to take over as master should we ever get a DR site, and other gotchas, like consistent naming of slave zone files now that they are binary.
Additionally, work on the nagios at work was hampered by the fact that provisioning for Solaris and legacy systems is CF2, and the new chef-based provisioning is still a work in progress...where I haven't had time to get into any of it yet. So, I had to recreate my CF3 promises for nagios in CF2.
But, the Friday before last weekend, it finally reached the point where it was ready to go live. Though I've been rolling in other wishlist items and smashing bugs in its configuration, and still need to decide what the actual procedure will be for delegating sections of nagios to other groups.
One of the things I had done with the new nagios at work was set up PNP4Nagios...as I had done at home. And, while looking to see if I needed to apply performance tweaks to the work nagios, all the pointers were to have mrtg or cacti collect and plot data from nagiostats. Well, a new work cacti is probably not going to happen anytime soon, and the old cacti(s) are struggling to monitor what they have now. I spent some time a while back trying to tune one of them...but it's probably partly hampered by the fact that its mysql can use double the memory that is allocated to the VM, though reducing it from running 2 spines of 200 threads each on the 2 CPU VM to a single spine with fewer threads has helped. Something like the boost plugin would probably help in this case, but the version of cacti is pre-PIA. But, it could be a long time before it gets replaced (not sure if upgrade is possible....) Our old cacti is running on a Dell PowerEdge server that has been out of service over 6 years... with the cacti instance over 8 years old (Jul 8, 2005)....and the OS is RHEL3.
Anyways, it occurs to me that there should be a way to get PNP4Nagios to generate the graphs, so I searched around and found check_nagiostats. Though no template for it. Oh, there's a template nagiostats.php; if I create a link for check_nagiostats.php it should get me 'better' graphs. Which is what I have CF2 do at work.
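The link itself is trivial, since PNP4Nagios picks a graph template matching the check command's name; a sketch (the real template directory varies by install, so a scratch directory stands in for it here):

```shell
# A symlink lets check_nagiostats reuse the shipped nagiostats.php template.
# Using a scratch dir as a stand-in for the real templates dir (an assumption;
# mine lives under the pnp4nagios share directory):
templates=$(mktemp -d)
touch "$templates/nagiostats.php"    # stands in for the shipped template
ln -sf nagiostats.php "$templates/check_nagiostats.php"
ls -l "$templates/check_nagiostats.php"
```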
So, recently there was a 'long' 4th of July weekend....on account that I opted to take Friday (the 5th) off as well.
I kind of thought I would tackle a bunch of different projects this weekend, though I've pretty much shelved the idea of re-IP'ing my home network. Perhaps something to do when I get my configuration management better fleshed out.
What I decided was that it looked like it's just one last thing on one of the two Ubuntu servers that I'm retiring. So, I figured I'd quickly move that and then go on to the next thing. In the end, I didn't get it completed until Monday night.
For background, some years back...after my return to IRC, I had initially gone with Chatzilla (being that Firefox was my standard browser), which later moved to xulrunner and Chatzilla so it was independent of my browser. Though it was kind of annoying having it running at work and at home, and somewhat confusing for co-workers that ran text based IRC clients in screen somewhere and ssh'd in, etc. Most people that did this, were doing irssi.
So, I initially built it from source and ran it on my old RedHat 7.3 server, and that was usable. Later I set up an Ubuntu box to replace that server (the hardware had previously been SuSE....acting as an internal router for ivs status tracking....) It evolved, in that I would start screen detached from rc.local....which was important since the system would see patches on a regular basis, requiring reboots....which is kind of a reason for switching to FreeBSD.
Over time, I would make little tweaks here and there, to this irssi setup. Like twirssi, doing ssl, and later bitlbee to integrate Facebook chat (came across some stuff that I should add now...)
And, incorporating other tweaks I come across online when there's some problem that becomes sufficiently bothersome that I want to address it. The one problem I haven't been able to solve is keeping server/system messages confined to the one window. Namely keeping system CRAP going to the system window, and allowing channel CRAP to show up in the channel windows....but instead I'll get system CRAP in whatever channel window is active. Which is annoying because it's usually the work channel. Where it should be just signal and no noise.
I had started to move things more than a month ago, in that I built irssi and bitlbee (including the cfengine3 promise for it...not really much config-wise for cfengine to manage for irssi...though I envisioned promising that it's running all the time, though irssi has generally been stable everywhere else that I've run it).
But, then I got distracted by other cfengine3 work. Even though things started to get pressing when twirssi stopped working, due to API 1.0 going away...so I had to update Net::Twitter and twirssi. Updating twirssi wasn't that hard to do, but Net::Twitter was a problem, so I opted to remove it and its dependencies and then install it and its dependencies using CPAN.
I also made note to install net/p5-Net-Twitter from ports on dbox.
twirssi seems to be having other issues, which I had intended to investigate...perhaps after I move... But, that was like a month ago....
This weekend, I decided it was time that I checked on port updates in my /compat/i386 FreeBSD 'system'. Which primarily exists to provide me some ports that don't build on 64-bit, namely emulators/wine-devel and net/nxserver. Don't recall the last time I used nx since I got it working; probably should check to see whether it is still working or not (probably okay on my home system, but might be broken on the work one....and might see about setting it up on the other work computer too).
Hmmm, hadn't updated ports since May 5th. Start with working through /usr/ports/UPDATING; run into a problem with 20130609: AFFECTS: users of audio/flac and any port that depends on it, in that it thinks perl depends on flac (kind of an annoyance I have with dependencies....there can be miles of separation between one port and another port, but everything gets marked as depending on that very bottom port, when it in fact didn't or doesn't... Was annoying in trying to figure out why a port was marked BROKEN / DEPRECATED and not getting any attention except that people should stop using it...when 100's of ports on my system depend on it. When it turns out that it's one or two ports that had an option set that caused them to depend on it. While the other ports generally don't care what options are enabled in that port, just that the command exists for it...or other reason. Though there are some ports that do care about what options were used, which I had ranted about earlier...and I ran into Thunderbird also having that dependency, resulting in this kluge patch:
But, I let the portmaster -r flac run anyways, with the suspicion that it would break later, because perl modules that depend on perl (and not flac) wouldn't get picked up as needing to be re-installed or upgraded, due to 20130612: AFFECTS: users of lang/perl* and any port that depends on it. It would break the re-install or upgrade of a port somewhere and abort. Which is what I found when I checked on it this morning.
So, I did a portmaster -R -r perl, and noticed that it seemed to include most of the ports that the previous portmaster hadn't done. In fact it included all of them. I also peeked in /usr/local/lib/perl5/site_perl/5.14.2 to see what perl modules had gotten missed....mainly the p5-XML-* ones that caused the previous portmaster to abort.
Though I probably should've looked to see if the second portmaster was going to address those, instead of doing them while it was asking whether to proceed. Because that caused it to abort when re-installing those perl modules (that I had done while it was waiting), but restarting it got things done.
That leaves the latest entry 20130627: AFFECTS: users of ports-mgmt/portmaster, which is just informational and not currently applicable.
Before running into the flac entry, there had been "20130527: AFFECTS: users of lang/ruby18", which was pretty straightforward, since it only exists as a dependency of ports-mgmt/portupgrade, which I seldom use now...but I have other scripts that use binaries that come as part of it (namely portsclean), which I could probably replace with the portmaster way or something else. But, it's not really a priority, plus who knows if I won't decide to go back to using portupgrade...which has options in its pkgtools.conf that I haven't found equivalents for with portmaster, though that isn't currently an issue right now. Except perhaps that I'm holding back on updating to the latest emulators/virtualbox-ose, since I've gotten warnings from various sources to stay away from it.
The other big one is what's the portmaster equivalent to
Will eventually run into a port that has a line like one of these:
Where I'm using
Somehow I expected there would be more than 12 ports wanting either client or server... probably missing the occurrences in multiline RUN_DEPENDS or some other way to specify the dependency. Since pkg_info says there are 103 ports that depend on the client, and two ports that depend on the server (neither being mail/roundcube, which are the ports that I'm running on 'zen' using the mysql server). On cbox, there are 73 ports that depend on it; some are obviously true...like net-mgmt/cacti-spine, but nothing is depending on the server...though it is obviously being used by cacti. I left dbox with the default databases/mysql55-client...there are 71 ports depending on it.
Meanwhile I have a postgres server running on zen, which was a dependency of something else that I had since removed....but I haven't stopped or removed postgres yet....
In fact after dealing with ruby, flac and perl....the only ports left to update are:
dialog4ports-0.1.3 < needs updating (index has 0.1.5_1)
freeglut-2.8.0 < needs updating (index has 2.8.1)
portmaster-3.16 < needs updating (index has 3.17)
Might be time to see what else I can get working under wine.
Wonder when I had last updated ports in the /compat/i386 on my system at work? And, do I want to tackle that from home, on a Sunday, instead of other important tasks/projects....
But first...lets see what port got updated since yesterday....
Odd, I seem to have missed updating lang/ruby18 on cbox and dbox....
So, there's this BOINC project out of Poland called Radioactive@Home, where you have a radiation detector hooked up to a computer taking samples, etc. It's my second BOINC project with a hardware sensor. Though I had signed up for this one first...back on June 16, 2011. QuakeCatcherNetwork had come later, but getting a sensor was quick (though there were delays in getting it working; they had switched to a new sensor where they didn't have Linux drivers yet...etc., etc.) But, doing Radioactive@Home took longer as sensors are built in batches; there had been early batches that I missed, and I wasn't all that sure at first if I really wanted to go to the hassle of getting one.
But, then another user announced that he would do a group purchase of 50 or so, which should cut shipping costs quite a bit by having a cheaper large shipment from Poland, plus domestic delivery for the last leg. The way delivery costs go, you can get up to 3 for the delivery charge...though most people only want one....at least initially.
Basically I ordered my first detector around August 2011, and finally received it in March 2012. And, it just runs...though occasionally I'll look to see if anything interesting is recorded (like the interesting trace for around the end of the world....)
Meanwhile, on June 26, 2012 there was an announcement of a new detector...a pretty looking one. My first sensor was in a prototype-type case with rough cutouts, etc. Not really bad looking, but still plain and crude looking. While the announced sensor looked neat, the kind of thing that I might consider putting on my desk at work....
So, there was basically an announcement that there wasn't going to be another bulk US purchase...so after some thought, I decided this new detector was just too pretty to pass up. So, I ordered one mid to late July, 2012. Got confirmation on July 23rd, 2012. 27 Euros for the detector plus 10 Euros shipping for up to 3 detectors; more than 3, pay for the detectors now and get billed for the actual shipping cost later. Plus, if using PayPal, specify that I'll pay the transaction fees....
In the previous order, it had been requested that we have PayPal funds to pay for the transaction....or use a check. I had tried to keep a float of cash in my PayPal account....but when it finally came time to pay, there wasn't quite enough to do that, so I opted to just mail a check. For this second order, I went with PayPal and had PayPal add the transaction charges to my total.
First detector cost me $46.25 by personal check. Second detector cost me $47.36 (after conversion and including the transfer charge).... I sent the PayPal money on August 21, 2012.
And, then it was wait and wait and wait. I would check the boards now and then for updates...but it was mostly other people wondering the same thing.
Eventually, I stopped checking in...and kind of forgot all about the sensor. Though I did visit the site briefly, but didn't linger or read the detector threads...I went to check what platforms the project supported. Because when I had originally ordered, I was down to a Solaris 10/x64 workstation, a Windows box, a first gen MacBookPro (32-bit Core Duo), and a dead Linux machine. Eventually, I got a computer to replace the dead Linux box...but I went with FreeBSD instead, and it eventually displaced the Solaris workstation. In February, 2013 while I was working late on my FreeBSD system, I saw the Windows box update itself and reboot, and then it failed to boot. It had killed itself....pretty much the same way my home Windows box had killed itself in an auto-update in February, 2012. I left it off, not sure what I would do with it....I thought about OmniOS or SmartOS...though it was a first gen i7, so no EPT for KVM. Eventually, I decided to install Ubuntu 12.04LTS on it....where it's mainly backup for when my FreeBSD system crashes.... It's one thing that new Seagate drives only have 1 year warranties...it's another thing that they seem to have trouble lasting that long.....
And, then an iMac 27" appeared on my desk....back when it seemed bleak on getting FreeBSD working as my main workstation....I was talked into getting one. But, FreeBSD remains my main workstation....while there are some things for which the iMac is the only computer I have where things work (like being able to participate in WebEx, Lync, Google Hangouts or Xoom for web conferencing....plus it finally solves having mail staying open while I switch to the appropriate desktop to do whatever....I'm up to 17 now....where there are typically 4 to 12 windows, either of uniform size or variable size, and on some desktops the windows overlap, though that desktop is mainly for tailing logs.... Where I'm up to 2 full desktops and 2 half desktops for that....) Anyways, I had made a quick visit...because I wondered if Mac OS X was a supported platform (it wasn't) or if anybody was using FreeBSD for this project....didn't get any search hits. And, it seemed unlikely that the hardware part would work through the Linux emulation on FreeBSD (especially the Fedora 10 base, and I'm not sure what the process for converting to the CentOS 6 base is that wouldn't break all the things I'm using Linux emulation for....though it is mostly other BOINC projects.) Though doing the search now, I see that a couple days ago the question got raised....with not much luck on having it find the detector ... but ending with a link to a FreeBSD version of the application.... Though since I have a Linux system at my desk (where its primary purpose is to run VBoxHeadless containing Windows 7, for those occasions where I need to use vSphere Center...and passing the time doing BOINC)...I'll just go with running the new detector, should it ever arrive, on that.
So, this morning I was wondering why my nagios was still warning about something that it shouldn't be. I was positive I had changed the warning threshold above where it was. I do an 'svn status' on my work dir: nothing uncommitted. I do an 'svn up' on the cfengine server....no updates. I drill down to the file and it's correct (perhaps I need an alias on this side as well...though I usually only use 'cdn' for where my svn work dir is or on the nagios server....though that's because at work the alias is used in association with nagios as well, where work nagios is not yet managed by cfengine, but I was considering it for the new nagios server that I'm trying to set up between fires and stuff at work....except the fact that we're still running cfengine2 is really starting to become a problem......though I wonder if cfengine2 could do it, if it weren't hampered by how a former admin had implemented things....The work cfengine made a mess when using it to set up a new system, because of weird cross interactions between 'promises' and that the promises weren't written in the same sequence they were run in; things that probably weren't a problem when cfengine was originally deployed to promise that nothing ever changes.)
Anyways....I finally hunt through the -v output... which is now not much different than debug noise, and nothing like what verbose used to be in 3.4.4.....no more searching for 'E nagios' to find where the start of "BUNDLE nagios" is in the output, and then finding the specific file promise..... What a mess. It's like they don't want you to know what's going wrong....
Turns out I missed some more uses of 'recurse' from cfengine_stdlib.cf, where xdev=true is busted.
It was one of three bugs that I had logged for cfengine 3....#2983. Which was almost immediately flagged as a duplicate of #2965 (3.5.0rc fails to recursively copy files with strange message)...and this morning at 5:03am, my bug was closed, as it indeed seems to be fixed for 3.5.1 (soon...).
Wonder what the definition of soon is....had a previous problem where cfengine was complaining about bad regex....when the default for insert_lines: is that they are 'literal' strings. Which was making it hard to use cfengine 3.4.x to make edits to my crontab files. After putting up with it for a couple of months, I finally visit the bug tracker and find that its already been reported and fixed for next version. But, months and months go by and no new version appears. Though it does seem to be fixed in 3.5.0.
Anyways, reading #2965 was interesting.... aside from where the dev? spots another bug in the same code and has that pulled as part of the fix. Also, it was reported against the RC, and still made it into release. Though I had reported a bug against an Ubuntu 12.04 beta release....and it persisted into the release version, where they debated fixing it because apparently LTS means don't update anything after its release...(though I thought they had said things like firefox would stay current instead of staying fixed at the version at time of release now...) Plus it seemed I had to keep reminding them that my bug was reported before release, so that should be reason enough to release the fix. I'm pretty sure they did, but I hardly use that Ubuntu desktop anymore (or any Ubuntu desktop....though I did fire up my laptop yesterday, but that's because there was a new VirtualBox and I hadn't updated the XP VM on there in quite some time....though I've been thinking about whether a FreeBSD laptop is feasible.)
Someone asks that they add a unit test for this bug. Where the response is that a unit test would need a running server, which they don't have (yet)...how long has cfengine been around for them to not be using it? Sure wouldn't want to be somebody who's paying for this.
So does that mean nothing is being tested, and that nobody involved in development uses cfengine? Because this was the kind of bug that pretty much anybody that uses cfengine3 would run into. Considering I only have the 3 systems (zen - policyserver, cbox, dbox) at the moment....
Perhaps I'm jaded by having worked for an Enterprise software company and how we did full builds every week, with full runs of automated and manual QA testing. And, having to create unit tests for less than trivial bugs as part of the fix/review-before-closure process. Though from what I'm hearing about Chef...it's worse....
Still haven't decided what I'm going to do with my Linux systems....migrating the files from Orac if I were to turn it into FreeBSD is the stumbling block, plus I would lose certain services...some of which might not really be an issue, since its probably time I make the leap to blu-ray. And, either I get another Roku or figure out how to incorporate the smart side of my TV into my life (probably time to finally upgrade my receiver....purchased October 27th, 1999)....