Category: "FreeBSD"



  10:42:00 am, by The Dreamer   , 902 words  
Categories: FreeBSD

Catching up on port update backlog....

This weekend, I decided it was time to check on port updates in my /compat/i386 FreeBSD 'system', which primarily exists to provide some ports that don't build on 64-bit, namely emulators/wine-devel and net/nxserver. I don't recall the last time I used nx since I got it working, so I probably should check whether it still works (probably okay on my home system, but it might be broken on the work one....and I might see about setting it up on the other work computer too).

Hmmm, I hadn't updated ports since May 5th. I started by working through /usr/ports/UPDATING and ran into a problem with the 20130609 entry, "AFFECTS: users of audio/flac and any port that depends on it", in that it thinks perl depends on flac. (This is an annoyance I have with dependencies....there can be miles of separation between one port and another, but everything gets marked as depending on that very bottom port, when in fact it doesn't. It was annoying when trying to figure out why a port was marked BROKEN / DEPRECATED and not getting any attention beyond "people should stop using it"...when hundreds of ports on my system supposedly depended on it. It turned out that one or two ports had an option set that caused them to depend on it, while the other ports generally don't care what options are enabled in that port, just that the command it provides exists...or some other reason. Though there are some ports that do care about what options were used, which I had ranted about earlier.) I also ran into Thunderbird having that dependency, resulting in this kluge patch:


--- Makefile.orig  2013-06-26 06:01:34.000000000 -0500
+++ Makefile  2013-06-27 20:07:04.142845537 -0500
@@ -98,6 +98,8 @@
+    ${WRKSRC}/mozilla/

But, I let the portmaster -r flac run anyway, suspecting that it would break later: perl modules that depend on perl (and not flac) wouldn't get picked up as needing to be re-installed or upgraded, due to the 20130612 entry, "AFFECTS: users of lang/perl* and any port that depends on it". That would break the re-install or upgrade of some port along the way and abort. Which is what I found when I checked on it this morning.

So, I did a 'portmaster -R -r perl', and noticed that it seemed to include most of the ports that the previous portmaster run hadn't done. In fact, it included all of them. I also peeked in /usr/local/lib/perl5/5.14.2 and /usr/local/lib/perl5/site_perl/5.14.2 to see which perl modules had gotten missed....mainly the p5-XML-* ones that had caused the previous portmaster to abort.

Though I probably should've checked whether the second portmaster run was going to address those, instead of doing them myself while it was waiting for me to confirm. That caused it to abort when it came to re-install those perl modules (which I had already done while it was waiting), but restarting it got things done.

That leaves the latest entry, 20130627: "AFFECTS: users of ports-mgmt/portmaster", which is just informational and not currently applicable.

Before running into the flac entry, there had been "20130527: AFFECTS: users of lang/ruby18", which was pretty straightforward, since ruby only exists on my system as a dependency of ports-mgmt/portupgrade, which I seldom use now...but I have other scripts that use binaries that come as part of it (namely portsclean), which I could probably replace with the portmaster way or something else. But, it's not really a priority, plus who knows whether I'll decide to go back to using portupgrade...which has options in its pkgtools.conf that I haven't found portmaster equivalents for, though that isn't currently an issue. Except perhaps that I'm holding back on updating to the latest emulators/virtualbox-ose, since I've gotten warnings from various sources to stay away from it.

The other big one: what's the portmaster equivalent to portupgrade's ALT_PKGDEP?

I will eventually run into a port that has a line like one of these:


RUN_DEPENDS+=   mysql-server>=0:${PORTSDIR}/databases/mysql${MYSQL_VER}-server
RUN_DEPENDS+=   mysql-server>=0:${PORTSDIR}/databases/mysql${MYSQL_VER}-server  
RUN_DEPENDS+=   mysqld_safe:${PORTSDIR}/databases/mysql55-server
RUN_DEPENDS+=   ${LOCALBASE}/bin/mysqld_safe:${PORTSDIR}/databases/mysql55-server
RUN_DEPENDS=    ${LOCALBASE}/libexec/mysqld:${PORTSDIR}/databases/mysql${MYSQL_VER}-server
RUN_DEPENDS+=   ${LOCALBASE}/libexec/mysqld:${PORTSDIR}/databases/mysql${MYSQL_VER}-server
RUN_DEPENDS+=   ${LOCALBASE}/libexec/mysqld:${PORTSDIR}/databases/mysql${MYSQL_VER}-server
RUN_DEPENDS+=   mysql-server>=0:${PORTSDIR}/databases/mysql${WANT_MYSQL_VER}-server
RUN_DEPENDS+=   mysql-server>=0:${PORTSDIR}/databases/mysql${MYSQL_VER}-server
RUN_DEPENDS+=   mysql-server>=0:${PORTSDIR}/databases/mysql${WANT_MYSQL_VER}-server
LIB_DEPENDS+=   mysqlclient:${PORTSDIR}/databases/mysql55-client
RUN_DEPENDS+=   mysqld_safe:${PORTSDIR}/databases/mysql${MYSQL_VER}-server

Where I'm using databases/percona55-server & databases/percona55-client now....

Somehow I expected there would be more than 12 ports wanting either the client or the server... I'm probably missing occurrences in multiline RUN_DEPENDS, or some other way of specifying the dependency. pkg_info says there are 103 ports that depend on the client, and two ports that depend on the server (neither being www/owncloud or mail/roundcube, which are the ports I'm running on 'zen' that use the mysql server). On cbox, there are 73 ports that depend on the client; some are obviously net-mgmt/cacti and net-mgmt/cacti-spine, but nothing depends on the server...though it is obviously being used by cacti. I left dbox with the default databases/mysql55-client...there are 71 ports depending on it.
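The count itself came from a simple grep over the ports tree; roughly like this sketch (the helper name is made up, and it shares the multiline-continuation blind spot mentioned above):

```shell
# find_mysql_deps TREE: list dependency lines in port Makefiles under TREE
# that point at a MySQL (or Percona) server/client port. On a real system
# TREE would be /usr/ports; RUN_DEPENDS continued across lines with '\'
# would still be missed.
find_mysql_deps() {
  grep -rhE --include=Makefile \
    '(RUN|LIB|BUILD)_DEPENDS.*databases/(mysql|percona)' "$1" | sort -u
}
```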

Meanwhile, I have a postgres server running on zen, which was a dependency of something else that I have since removed....but I haven't stopped or removed postgres yet....

In fact, after dealing with ruby, flac and perl....the only ports left to update are:

dialog4ports-0.1.3 < needs updating (index has 0.1.5_1)
freeglut-2.8.0 < needs updating (index has 2.8.1)
portmaster-3.16 < needs updating (index has 3.17)

Might be time to see what else I can get working under wine.

I wonder when I last updated ports in /compat/i386 on my system at work. And do I want to tackle that from home, on a Sunday, instead of other important tasks/projects....

But first...let's see what ports got updated since yesterday....


Odd, I seem to have missed updating from lang/ruby18 to lang/ruby19 on cbox and dbox....


  08:57:00 am, by The Dreamer   , 838 words  
Categories: Home Theatre, Software, Momitsu V880N, FreeBSD, CFEngine

This just in, cfengine developers don't test or use cfengine!?

So, this morning I was wondering why my nagios was still warning about something that it shouldn't be. I was positive I had changed the warning threshold above where it was. I did an 'svn status' on my work dir: nothing uncommitted. I did an 'svn up' on the cfengine inputs, drilled down to the file, and it's correct. (Perhaps I need an alias on this side as well...though I usually only use 'cdn' on the side where my svn work dir is, or on the nagios server. At work this alias is used in association with nagios as well, where the work nagios is not yet managed by cfengine, but I was considering it for the new nagios server that I'm trying to set up between fires and stuff at work....except the fact that we're still running cfengine2 is really starting to become a problem. Though I wonder if cfengine2 could do it, if it weren't hampered by how the former admin had implemented things....The work cfengine made a mess of setting up a new system, because of weird cross interactions between 'promises', and because the promises weren't written in the same sequence they run in. Things that probably weren't a problem when cfengine was originally deployed to promise that nothing ever changes....)

Anyways....I finally hunted through the -v output...which is now not much different from debug noise, and nothing like what verbose used to be. More searching for 'E nagios' to find where the start of "BUNDLE nagios" is in the output, and then finding the specific file promise.....what a mess. It's like they don't want you to know what's going wrong....

Turns out I missed some more uses of 'recurse' from, where xdev=true is busted.

It was one of three bugs that I had logged against cfengine 3....#2983, which was almost immediately flagged as a duplicate of #2965 ("3.5.0rc fails to recursively copy files with strange message")...and this morning at 5:03am, my bug was closed, as it indeed seems to be fixed in 3.5.1 (soon...).

I wonder what the definition of soon is....I had a previous problem where cfengine was complaining about a bad regex....when the default for insert_lines: is that they are 'literal' strings. That was making it hard to use cfengine 3.4.x to make edits to my crontab files. After putting up with it for a couple of months, I finally visited the bug tracker and found that it had already been reported and fixed for the next version. But, months and months went by and no new version appeared. Though it does seem to be fixed in 3.5.0.

Anyways, reading #2965 was interesting....aside from where the dev? spots another bug in the same code and has that pulled in as part of the fix. Also, it was reported against an RC, and still made it into the release. Though I had once reported a bug against an Ubuntu 12.04 beta release....and it persisted into the release version, where they debated fixing it, because apparently LTS means don't update anything after its release...(though I thought they had said things like firefox would stay current instead of staying fixed at the version at time of release now...). Plus it seemed I had to keep reminding them that my bug was reported before release, so that should be reason enough to release the fix. I'm pretty sure they did, but I hardly use that ubuntu desktop anymore (or any ubuntu desktop....though I did fire up my laptop yesterday, but that's because there was a new VirtualBox and I hadn't updated the XP VM on there in quite some time....though I've been thinking about whether a FreeBSD laptop is feasible.)

Someone asks whether they have a unit test for this bug. The response is that a unit test would need a running server, which they don't have (yet). How long has cfengine been around, for them to not be using it? I sure wouldn't want to be somebody who's paying for this.

So does that mean nothing is being tested, and that nobody involved in development uses cfengine? Because this was the kind of bug that pretty much anybody who uses cfengine3 would run into. Considering I only have the 3 systems (zen - policyserver, cbox, dbox) at the moment....

Perhaps I'm jaded from having worked for an Enterprise software company, where we did full builds every week, with full runs of automated and manual QA testing. And where we had to create unit tests for anything beyond trivial bugs as part of the fix/review-before-closure process. Though from what I'm hearing about Chef...it's worse....

Still haven't decided what I'm going to do with my Linux systems....migrating the files off Orac, if I were to turn it into FreeBSD, is the stumbling block, plus I would lose certain services...some of which might not really be an issue, since it's probably time I make the leap to blu-ray. And, either I get another Roku or figure out how to incorporate the smart side of my TV into my life (probably time to finally upgrade my receiver....purchased October 27th, 1999)....


  08:48:00 am, by The Dreamer   , 150 words  
Categories: FreeBSD

amd[###]: No map entry for <old_share>

For almost a year, every 3 seconds, two lines have been appearing in /var/log/daemon.log:

amd[####]: No map entry for <old_share1>
amd[####]: No map entry for <old_share2>

Every now and then, I would spend time hunting around my filesystem trying to figure out what part of amd might still be holding onto the old information...it's not in any of the config files, not in any of its directories, and it's not making symlinks for it anywhere.

I was finally about to give in and try asking on a mailing list....when, while I was writing up how I did things for the message, I expanded my search to make really sure there weren't any lingering references....and found that I had two symlinks in my homedir pointing to the old shares.

Once I removed those symlinks....the messages stopped.
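In hindsight, a find one-liner would have turned them up immediately; a sketch (the helper name is made up, and the pattern is whatever the old share names look like):

```shell
# stale_links DIR PATTERN: list symlinks under DIR whose *target* matches
# PATTERN (-lname matches against the link target; both GNU and FreeBSD
# find support it). e.g. stale_links "$HOME" '*old_share*'
stale_links() {
  find "$1" -type l -lname "$2"
}
```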

I guess the gam_server process that is watching my home directory was polling those symlinks every 3 seconds....



  04:04:00 pm, by The Dreamer   , 1123 words  
Categories: Software, FreeBSD, CFEngine

cbox/dbox cfengine update also full of fail

First I had saved, so that I could invoke cf-agent from /var/cfengine/bin to pull in the new cfengine-3.5.0 binaries and pull up the new inputs from my policy server.

Except I forgot to commit the 'bundle agent foo' kluge, and I had done an 'svn revert ...' to undo all the fiddling I had been doing on the policy server.

But, after I make the change, cbox/dbox refuse to copy up the new '' file. I try running things verbose, 'cf-agent -v > out', but there's no out file??? :??: Did I slip? Am I losing my mind?

Guess I should do it from another directory, because the update does its thing of removing my 'out' file. :oops:

So, it's saying this:

2013-06-15T14:40:55-0500  verbose: Entering directory '/var/cfengine/inputs'
2013-06-15T14:40:55-0500  verbose: Device change from 1242801830 to 843968349
2013-06-15T14:40:55-0500  verbose: Skipping '/var/cfengine/inputs/' on different device
2013-06-15T14:40:55-0500  verbose: Device change from 1242801830 to 843968349
2013-06-15T14:40:55-0500  verbose: Skipping '/var/cfengine/inputs/' on different device

What are those devices? Is it saying that, because the file is remote, it won't copy it? After some looking around....I see that 'u_recurse("inf")' is:

body depth_search u_recurse(d) {
        depth   => "$(d)";
        xdev    => "true";
}

Which seems perfectly reasonable: I want it to recurse the destination directory and not cross devices. But apparently, it now means don't recurse into devices different from the source directory's device???

I looked around to see if I had written the 'xdev' line myself, or if it came from where I based my initial setup....and found that it's what was given over on Unix Heaven -

That change in behavior totally doesn't make any sense. But, enough hair pulling....let's get things running again on cbox & dbox.

Hmmm, now it's spewing warning messages:

2013-06-15T14:47:31-0500  warning: Setting replace-occurrences policy to 'first' is not convergent
2013-06-15T14:47:32-0500  warning: Setting replace-occurrences policy to 'first' is not convergent
2013-06-15T14:47:32-0500  warning: Setting replace-occurrences policy to 'first' is not convergent
2013-06-15T14:47:33-0500  warning: Setting replace-occurrences policy to 'first' is not convergent
2013-06-15T14:47:33-0500  warning: Setting replace-occurrences policy to 'first' is not convergent
2013-06-15T14:47:35-0500  warning: Setting replace-occurrences policy to 'first' is not convergent

It didn't do that before...and there's only one occurrence, so it is convergent. And, my use of 'replace_with => With("...")' for 'replace_patterns:' was lifted from -- So, in With(), I should change 'occurrences' from "first" to "all"....why have this attribute if its purpose is just to annoy now?
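For the record, the fix is just flipping that one attribute; this sketch mirrors the shape of the stdlib's value() body (With is my own body name, not stdlib):

```
body replace_with With(x)
{
    replace_value => "$(x)";
    occurrences => "all";   # "first" now draws the non-convergence warning
}
```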

It's already annoying enough that -q is gone....but -K remains? If I'm running cf-agent manually to speed up picking up a change....I'd like to skip the splaytime wait, but I don't want to ignore locks...because if I'm close to when cf-execd wants to run, it results in a big mess, especially for promises using single copy nirvana....cf-agent doesn't distinguish why the specific copy failed when there's a collision in this case, so it causes a less specific version to get installed, which it then has to fix in the next run.

This causes problems for things like DNS and NTP. The specific config is that cbox is an NTP server that polls external servers....the generic NTP config is to use cbox and dbox as NTP servers....which has resulted in wild client oscillations. Though both my dsl and cable connections have been rather unstable lately...dsl will just have periodic drops, while cable goes out until I reboot my cable modem....and using two NTP servers is probably the worst combination. But, I don't think I could get away with 4+ NTP servers on my home network. DNS....well, the specific config is that cbox is primary authoritative and dbox is secondary authoritative, while the generic config is a recursive caching resolver....similar to how I did DNS servers under cfengine at work.....

4+ justification -

NTP likes to estimate the errors of all clocks. Therefore all NTP servers return the time together with an estimate of the current error. When using multiple time servers, NTP also wants these servers to agree on some time, meaning there must be one error interval where the correct time must be.

Number of servers:

  1. Always trusted, even when it's totally wrong.
  2. If they differ, how do you break the tie?
  3. A server failing results in the above.

So, 4+ servers for reliable accuracy.

Whether the total should be odd or some other sequence, is another debate.

But three is a good place to start, and you can progress to three-groups-of-three if you feel the need.

three-groups-of-four? four-groups-of-four?
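In ntp.conf terms, that advice is just four (or more) server lines; a sketch for a client config (the public pool hostnames stand in for whatever servers you'd actually pick):

```
# /etc/ntp.conf sketch: with four upstream servers, ntpd can discard one
# falseticker and still have a majority agreeing on the correct time
server iburst
server iburst
server iburst
server iburst
```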

Hmmm, I don't know why I had never done 'cf-agent -v > out' in /var/cfengine/inputs on cbox/dbox before....but I often do it on my policy host. I guess I don't normally need to be in /var/cfengine/inputs on cbox/dbox when I'm troubleshooting cfengine.

Hmmm, so in the verbose run, I only see two places with warnings??? That seems odd.

cf3>     .........................................................
cf3>     Promise's handle:
cf3>     Promise made by: "name="tor""
cf3>     .........................................................
cf3>  -> Looking at pattern name="tor"
cf3> WARNING! Setting replace-occurrences policy to "first" is not convergent
cf3>  -> Verifying replacement of "name="tor"" with "name="tor2"" (2)
cf3>  -> Replace first occurrence only (warning, this is not a convergent policy)
cf3>  -> Replaced pattern "name="tor"" in /usr/local/etc/rc.d/tor2
cf3>  -> << (2)"name="tor2""
cf3>  -> >> (2)"name="tor2""
cf3>  -> Replace first occurrence only (warning, this is not a convergent policy)

That warning makes even less sense here....this edit promise is about creating /usr/local/etc/rc.d/tor2 from /usr/local/etc/rc.d/tor. Since the starting file is fixed, replacing the first occurrence is inherently convergent. So, it's just there to annoy.

And, wait....it's also having issues using recurse() from, which has xdev => "true"; set as well. And, that body is unchanged from the previous version.

Oh, I suppose I could remove splaytime....it's not like I have a huge number of systems....

Wait... the tor2 edit_lines also has an if_elapsed("60"), so the warnings are from other edits that were probably skipped. Guess I'll have to risk a -K to find all the occurrences of the warning.

Though it's strange that I'm only seeing the messages on dbox, even though the promise exists for both cbox & dbox....

OTOH, on cbox it did catch an error....where it was promised that "/usr/local/www/apache22/data/tivo/.", etc. would be accessible to the webserver. Except I had typed "/usr/local/www/apache2/data/tivo/.", etc. There was no error/warning about skipping this promise before. Though in 3.5.0, it gave an error that it couldn't chdir to that directory...which delayed my noticing the actual reason for the error.

And, once again...another wasted Saturday.... I didn't really need to go to the mall and spend money I don't have, I suppose. :hmm:

Now, before I can go back to just doing 'portmaster -a' to keep current....I still need to decide the fate of the databases/mysql55-server update....I suppose while cfengine was down, I could've done the upgrade, if the decision was to stay put. But, I'm still wondering if I want to try percona, or perhaps better yet, maria! I wonder what my friend Maria is up to these days....

  01:15:00 pm, by The Dreamer   , 694 words  
Categories: Software, FreeBSD, CFEngine

Meanwhile upgrading cfengine-3.4.4 to cfengine-3.5.0 not going well.

Upgrading the port was no problem....but it broke my cfengine. Why? The port puts the cfengine binaries in /usr/local/sbin, while the cfengine practice is that it has a private copy in /var/cfengine/bin. Which would be fine if the binaries didn't have shared library dependencies. Which they do, specifically which is gone in cfengine-3.5.0...there's a
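A quick way to see this kind of breakage is to ask the linker what each binary wants; a sketch (the helper name is made up; on the real system I'd point it at /var/cfengine/bin/cf-agent and look for anything reported as "not found"):

```shell
# needed_libs BIN: list the shared objects BIN is linked against, so a
# library that a port upgrade removed stands out in the output
needed_libs() {
  ldd "$1" | awk 'NF {print $1}'
}
```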

Though before I discovered this problem, I first wanted to make some tweaks to so that I would have some indication that it had copied up new binaries from /usr/local/sbin to /var/cfengine/bin, since I had noticed that files there were newer than expected. Though it was probably just that I had rebuilt the same version of the port, because a dependency had updated and /usr/ports/UPDATING indicated that I needed to do that.

This is probably why, at work, the person that set up our cfengine 2 went to extreme effort to create static cfengine executables...ignoring that such things are officially not supported on Solaris. Though we seem to get away with running those executables, built on a Sol10u3 sun4u system...on systems current up to Sol10u11, a few Sol11 systems, and systems of the sun4v architecture.

In a past life...we had run into a statically built executable (the installer) not working on our first UltraSPARC-III system (a Sun v280r)...trying to recall what our build machine was back then....my recollection says we only had the SPARCserver 20 and SPARCstation 10 before that. As I recall, we had to wait for a patch from Sun, as well as rebuild the executable shared on the have it work. It wasn't long after that, though, that we retired support for sun4m, changing minimum requirements. I wonder if the application has become 64-bit yet? But, for the ABI backwards compatibility claim to work, the executable needs to be built so that it'll find the libraries provided on newer systems, to allow older executables to still work.....

portmaster probably didn't know that it should save /usr/local/libexec/cfengine/, though would the old executables know how to find the library once it's moved aside? (I do have SAVE_SHARED=wopt uncommented in my portmaster.rc file.)

It occurs to me that I could just restore the file from backup, which would allow me to run

and get me to where everything should work again.

Though before I did that, I had invoked cf-promises (the one in my path -- /usr/local/sbin), and it complains about Guess it doesn't like the old; the new one isn't where the old one was, but here instead --> /usr/local/share/cfengine/CoreBase/libraries/ I took a quick look at what's in it....mainly to make sure that the bundles/bodies I use are still there...and noticed some interesting new ones....such as a package_method of freebsd_portmaster. Someday I should look at using cfengine3 to do port/package promising....

But first, get cfengine working on the policyhost; hopefully the other servers (at 3.4.4) are still working.....guess not, 3.4.4 doesn't like the 3.5.0 file. But, cf-promises is also not happy with some of my other promises....

Guess I'll update those while I get policyhost working again.


Or perhaps I need to revert....

root@zen:/var/cfengine/inputs 317# cf-agent
2013-06-15T13:22:53-0500    error: Bundle 'crontab' listed in the bundlesequence is not a defined bundle
2013-06-15T13:22:53-0500    error: Fatal CFEngine error: Errors in promise bundles
1.755u 0.113s 0:01.94 95.8%     172+2501k 133+12io 1pf+0w
root@zen:/var/cfengine/inputs 318# 
# cf-agent -v
2013-06-15T14:00:57-0500  verbose: Parsing file '/var/cfengine/inputs/'

It's there, why's it not working.... 'cf-agent -d' doesn't work, but it will only do failsafe....


  09:12:00 am, by The Dreamer   , 390 words  
Categories: Software, FreeBSD

Perl update continued

For some reason I had cd'd into /usr/local/lib/perl5 on dbox and noticed that 5.16.2 was still present...well, 5.12.4 was still on zen after the upgrade to 5.14.2...and it just had whatis files. But, I went and looked inside, and found more than just whatis files.

Using 'pkg_info -W', I found that I had other ports that had installed perl modules that didn't start with 'p5-' or depend on ''.

So, off to rebuild those ports.

On dbox/cbox it was just databases/rrdtool, print/ some stray files left by already updated ports or removed ports. But, on zen there was a much bigger list of ports:

japanese/p5-Jcode (which was missed, because the package name is ja-p5-Jcode-)

Hmmm, I probably need to update my i386 space, which is going to be wrong now...because uses the make.conf name of 'global', and I haven't updated it in a long time.... Not since May 4th. emulators/wine-devel has been updated since then, so I guess I'll have to tackle it sooner rather than later.... Especially since I'm thinking of making another attempt to see if I can get other apps running in wine versus VirtualBox....



  08:36:00 am, by The Dreamer   , 959 words  
Categories: Software, FreeBSD, CFEngine

Perl update + cfengine3 and subversion == great big mess on FreeBSD

So, I was up ridiculously early this morning, because I got up at the right time during the night...but forgot to take my second dose, and it was too late to take it when I woke again.

So, I thought I would tackle updating to the latest perl on zen. I chose not to just do a blind 'portmaster -r perl', since that would include anything that depends on being able to run perl, or depends on something that depends on perl...possibly many levels down. Somebody should come up with a simple way to only rebuild the ports that directly depend on another port....
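There's no portmaster flag for that as far as I know, but the old pkg_tools database can be mined for something close; a hypothetical helper (name is mine, DB would normally be /var/db/pkg, and whether the @pkgdep registrations are truly direct-only depends on how each port recorded them):

```shell
# pkgdep_rdeps DB PKG: list installed packages whose +CONTENTS registers a
# @pkgdep on PKG. Narrower than +REQUIRED_BY (which pkg_info -R prints and
# which accumulates indirect dependents too), but still only a sketch.
pkgdep_rdeps() {
  db=$1 pkg=$2
  for c in "$db"/*/+CONTENTS; do
    [ -f "$c" ] || continue
    if grep -qx "@pkgdep $pkg" "$c"; then
      basename "$(dirname "$c")"
    fi
  done
  return 0
}
```

Something like `pkgdep_rdeps /var/db/pkg perl-5.14.4 | xargs portmaster` would then rebuild just that narrower set.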

What I decided to do, after updating perl, was 'portmaster p5-*', then a pkg_libchk to see which ports are missing libraries, and redo those.

I did this first by logging into my machine at work (mew), and it went quickly and worked pretty cleanly. Though there are only half the number of p5- ports there, and fewer ports missing libraries as well.

So, back on zen...I updated perl and then started the 'portmaster p5-*', when it stopped: a port is dependent on /usr/local/bin/perl5.14.2, which is not found, because I just updated to 5.14.4!

How do you have ports that are dependent on a specific version, especially since it's possible to have a different version of perl? When I first installed zen, the default perl was 5.12.x...but I had elected to upgrade to 5.14.x when that became the new default. OTOH, when I set up cbox/dbox, I opted to go with 5.16.x...and at work, mew had been started before zen, so it was also 5.12.x initially...but I chose to jump that up to 5.16.x (actually no .x at first, but it was very quickly followed by a .1...which inflicted a lot of pain again....though in this latest update they have switched to just major.minor for the module path, which means no more pain until it's time to upgrade to a newer minor release....)

The first casualty was devel/p5-B-Keywords ...and I found it's depended on by textproc/p5-Perl-Critic and textproc/p5-Test-Perl-Critic....with the last port being the leaf. So it was probably just a build dependency for something I had installed long ago (since it doesn't sound like anything I would call from any of my own perl scripts). So, I figured I'd just delete those packages and carry on.

Nope, the next port also failed with the same strange demand.


After hunting around a bit, to try and figure out what was making this port think it depended on the previous version of dawned on me that the perl port updates /etc/make.conf with the version of perl that is installed.

And, I have


being managed by cfengine3. :oops:

So, I go update the file, and try to svn commit the change....which blows up, because the perl modules needed for the commit hook haven't been updated yet.

Well, guess I'll stop cfengine3 from reverting my /etc/make.conf (by disabling the promise from the root side). Though IIRC, the commit hook is mainly to prevent root from doing commits into my subversion repository, instead of what we do at work: put subversion on an NFS filesystem and have rootsquash prevent root from being able to write into the repository.

Phew....I can commit the updated /etc/make.conf and let cfengine promise it. Perhaps as a future project, I'll see if there's some way to have cfengine set that value dynamically or something.

Though should I have cfengine promising all of /etc/make.conf? There's a block in /etc/make.conf that is the same across all my FreeBSD systems, listing the ports I've come across that don't like 'make -j#'. The intent was to have cfengine promise just that part, though there have been other additions that I want the same on all my FreeBSD systems, like the override on modules in net-snmp that I mentioned when I ran into some cacti issues. Though there's value in having cfengine manage the whole file....or rather, in having the files be in subversion.

Next up...nagios has alerted me that spamassassin has stopped. Yeah, I guess that would happen. Which means there's going to be a big chunk of spam in all my mailboxes (126...I need to find a good way to aggregate them back so that I can read them all ... from roundcube). The module rebuilds are done, and spamassassin seems to be running again (probably because cfengine3 has a promise to keep for that). Though it might work better if I do an 'sa-compile' to make sure that part is right...though it seemed to me it was only major.minor....



  08:42:00 pm, by The Dreamer   , 1176 words  
Categories: Software, Computer, Storage, FreeBSD, CFEngine

Another weekend seems to be slipping away on me....

And, it's the same time suck....cacti.

Last weekend got away from me because I made another attempt to improve cacti performance. I had tried adding 3 more devices to it, and that sent it over the limit.

I tried the boost plugin....but it didn't help, and only made things more complicated and failure prone. Evidently, updating rrd files is not a constraint on my cacti server. Probably because of running on an SSD.

I made another stab at getting the percona monitoring scripts to actually work under the script server, but that failed. I suspect the scripts aren't reentrant, because of their use of global variables and their reliance on 'exit' to clean up things they allocate or open.

I had blown a previous weekend trying to build the most recent version of hiphop to maybe compile the scripts, but after all the work of figuring out how to compile it, the latest 2.0x would SEGV, just as the older lang/hiphop-php did after I resolved the problem of building with the current boost (a template had changed to need a static method, meaning old code won't link with newer boost libraries without a definition of it). And, this is beyond what I have in my wheelhouse to try to fix.

During the week, I had come across some more articles on tuning FreeBSD, namely a discussion of kern.hz for desktops vs servers: the default of 1000 is good for desktops, but the historical setting of 100 is what to use for servers. Though IIRC, ubuntu uses 250 HZ for desktops and 100 HZ for servers; it also doesn't do preemption in its server kernel, along with other changes (I wonder if some of those would apply to FreeBSD?). Though modern kernels have been moving toward being tickless, which I thought was in for FreeBSD 9; the more correct term is dynamic tick mode...and it is more about not doing unnecessary work when things are idle. Which isn't the case with 'cbox'. So, perhaps fiddling with kern.hz and other sysctls might still be relevant. Though I haven't really found anything detailed/complete on what would apply to my situation.

So, I thought I would give kern.hz=100 a shot.
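For reference, kern.hz is a boot-time tunable, so the change belongs in /boot/loader.conf rather than sysctl.conf; a sketch:

```
# /boot/loader.conf: lower the timer interrupt rate from the 1000 Hz
# default to the historical server setting
kern.hz="100"
```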

At first it seemed to make an improvement in how long it took to complete a poll, and the load was lower. Until I realized that a service had failed to start after the reboot. I had only run the rc script by hand; I hadn't tested it in a reboot situation. And, it's not really an rc used to be a single line in rc.local that worked on ubuntu and FreeBSD (except on one of the Ubuntu systems it resulted in a ton of zombie processes, so making it an init.d script that I could call restart on happened).

So, I spent quite a lot of time reworking it into what will hopefully be an acceptable rc script. One thing I had changed was to stop using a pipe ('|'), which was causing the process after the pipe to respawn, and the previous process to turn into a zombie, each time the log file was rotated and "tail -F" announced the switch. And, this was while I was moving the service to FreeBSD (and under management by cfengine 3.)

Though looking at my cacti graphs later....while the service had failed to start after the reboot, it turned out to have been running for some time, until I had broken it completely while trying to rc-ify the init script. Well, duh....I had cfengine set to promise that the process was running, and it had repaired the fact that it hadn't started after the reboot.

Another thing I had done when I init-ified the startup of this service was to switch from using a pipe ('|') to using a fifo, which addressed the respawning and zombie problem and eliminated the original reason to have an init.d script....

While the init.d script I had worked on was just starting the two processes with '&' on the end and then exiting, FreeBSD's rc subroutines do a bit more than that. So things weren't working. The problem was that even though I was using daemon(8) instead of '&', so that daemon would capture the pid and make a pidfile, it seems daemon wants the process it manages to be fully working before it'll detach. But, the process is blocked until there's a sink on the other end of the fifo (does 'sink' fit as the name for the fifo's reader?) I first wondered if I could just flip the two around, but I suspect starting the read process first would leave it just as blocked until the write process is started. So, I cheated by doing a prestart of the writing process and only tracking the reading process.
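
The blocking behavior is easy to see in a toy example. This sketch (paths hypothetical) shows that whichever side opens the fifo first just sits there until the other side shows up, which is why start order matters:

```shell
#!/bin/sh
# Opening a fifo for writing blocks until something opens it for reading,
# and vice versa -- so the writer backgrounded here blocks until the read.
FIFO="/tmp/fifo_demo.$$"
mkfifo "$FIFO"

# Writer goes into the background first; its open() blocks awaiting a reader.
( echo "log line" > "$FIFO" ) &
writer=$!

# The reader opens the fifo, which unblocks the writer, and drains one line.
read line < "$FIFO"
echo "reader got: $line"

wait "$writer"
rm -f "$FIFO"
```

The same thing would happen to daemon(8): the process it supervises blocks in open() until the far end of the fifo is attached, so daemon never sees it get "fully working".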

Though it took a bit more work to get the 'status' action to work....eventually I found I needed to define 'command_interpreter', since the reading process is a perl script. And, check_pidfile does more than just check that there's a process at the pid: it checks that it's the right process, and it distinguishes between arg0 and the rest.
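
Putting those pieces together, here is a minimal sketch of the shape the rc.d script ended up taking. All the names (service, paths, fifo, helper scripts) are hypothetical, and this is scaffolding under the assumption that daemon(8) writes the pidfile for the perl reader, not my actual script:

```shell
#!/bin/sh
# PROVIDE: logreader
# REQUIRE: DAEMON
# KEYWORD: shutdown

. /etc/rc.subr

name="logreader"                 # hypothetical service name
rcvar="logreader_enable"

pidfile="/var/run/${name}.pid"
fifo="/var/run/${name}.fifo"     # hypothetical fifo path

# The reader is a perl script, so 'status' needs command_interpreter
# for check_pidfile to match the running process against the pidfile.
procname="/usr/local/libexec/logreader.pl"
command_interpreter="/usr/local/bin/perl"
command="/usr/sbin/daemon"
command_args="-p ${pidfile} ${procname} < ${fifo}"

start_precmd="logreader_prestart"

# The cheat: start the writing process in prestart. It blocks on opening
# the fifo until the reader comes up, so it can't be the tracked process.
logreader_prestart()
{
    [ -p "${fifo}" ] || mkfifo "${fifo}"
    /usr/local/libexec/logwriter.sh > "${fifo}" &
}

load_rc_config $name
run_rc_command "$1"
```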

Pretty slick...guess I need to do a more thorough reading of the various FreeBSD handbooks, etc. Of course, it has been 13+ years between when I first played with FreeBSD and its takeover of my life now.

As for the kern.hz change...it had made a small difference, but no improvement in the cacti system stats. Basically the load average fluctuates a bit more and the CPU utilization seems to be a bit lower...though that could be because the 4 lines of the cacti graph aren't so close to each other now.

Meanwhile...I noticed that one of the block rules in my firewall had a much higher count than I would expect, so I set about getting logging configured to see what that's about.....(which I was working on when I remembered that I hadn't rebooted after making the kern.hz change to /boot/loader.conf yesterday...the commit also picked up files that I had touched while working on moving the one remaining application on 'box', though that may get delayed to another weekend....perhaps the 4 day one coming up.)

I had set cf-execd's schedule to be really infrequent (3 times an hour), because I was doing a lot of testing and cf-agent collisions are messy....messier than they were in cfengine 2 (in 2 it usually just failed to connect and aborted; in 3 it would keep trying and splatter bits and pieces everywhere....which is bad when there are parts using single copy nirvana, resulting in services getting less specific configs until the next run.)

But, I sort of brought back dynamic bundle sequences....keyed off of "from_cfexecd", so I can test my new promise with fewer problems of colliding with established promises. Though there are other areas where things still get messy.... I need to clean up some of the promises I had based on how things were done at work, so that the promises are more standalone.
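
A minimal sketch of what keying off "from_cfexecd" looks like (the bundle names are hypothetical; from_cfexecd is the class cf-agent carries when it was invoked by cf-execd):

```
body common control
{
    bundlesequence => { "@(g.seq)" };
}

bundle common g
{
  vars:
    # Scheduled runs via cf-execd only execute the established promises;
    # a manual cf-agent run also picks up the bundle under test.
    from_cfexecd::
      "seq" slist => { "established" };
    !from_cfexecd::
      "seq" slist => { "established", "under_test" };
}
```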

Kind of weird using my home cfengine 3 setup, and other admin activities, as the means to break the bad habits I had picked up at work....

  07:05:00 pm, by The Dreamer   , 2118 words  
Categories: Software, Computer, BOINC, FreeBSD

sqlite3 SECURE_DELETE and Firefox

So a few days ago, databases/sqlite3 was updated in ports, and in the portmaster run I was faced with its config dialog. I think I had gone with the defaults previously, but decided to take a closer look this time. I saw SECURE_DELETE, with the description "Overwrite deleted information with zeros". That sounds like a waste of time; I should probably turn that off.

A quick online search, and I found this:

The secure_delete setting causes deleted content to be overwritten with zeros. There is a small performance penalty for this since additional I/O must occur. On the other hand, secure_delete can prevent sensitive information from lingering in unused parts of the database file after it has allegedly been deleted.

Yup, definitely just a waste of time...it even says so. The OTOH part? Wrong, at least here. Why? Because I'm running my FreeBSD system on ZFS, which is copy-on-write. SQLite is just spinning its wheels writing zeros: ZFS writes those zeros out to freshly allocated blocks, while the old blocks holding the "deleted" data sit on disk intact until the pool eventually reclaims them. Simply deleting the content achieves the same end result faster.
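
Worth noting, and part of why the build-time knob seems overrated to me: the compile-time option only sets the default, and secure_delete is also a per-connection pragma, so an application that actually wants it can flip it at runtime either way. A quick sketch with the sqlite3 CLI, assuming it's installed (database path is a throwaway):

```shell
# 1 = overwrite deleted content with zeros, 0 = don't bother.
sqlite3 /tmp/sd_demo.db 'PRAGMA secure_delete;'            # compile-time default
sqlite3 /tmp/sd_demo.db 'PRAGMA secure_delete = OFF; PRAGMA secure_delete;'
```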

Of course, what happens a little while later? There's an update to www/firefox in ports, where the configure fails because sqlite3 wasn't built with SQLITE_SECURE_DELETE. Well, I'm not turning on stupid for Firefox...I'm already disappointed by how slow it has become (and PGO seems to be broken again), to where chrome/chromium is now my everywhere browser. Which is working for the most part, now that I don't have a Solaris workstation as part of my everywhere.

Well, it's just configure that is testing for it, and there should be a way to turn that off. Hmmm, no option to do that; guess I'll have to alter the configure script. Do I inject a patch into the files directory? Looks like the file is already being adjusted elsewhere, though I don't see a patch in files that works on it. Okay, it's the post-patch target in the Makefile. Can I just add to that? Guess the way to do it is to change AC_MSG_ERROR to something that doesn't terminate the configure. Unfortunately I have the portmaster.rc option "PM_DEL_BUILD_ONLY=pm_dbo" uncommented, so I can't quickly look at what AC_MSG_??? I could use. I find some online documentation that describes AC_MSG_CHECKING, AC_MSG_RESULT, AC_MSG_NOTICE, AC_MSG_ERROR, AC_MSG_FAILURE, and AC_MSG_WARN...the first 3 are messages that aren't emitted if the '--quiet' or '--silent' options are used. I don't think those options are normally used, but it seems like a good idea to me. I'll use AC_MSG_NOTICE (though now that I think of it, AC_MSG_RESULT is probably valid, since it was an AC_MSG_CHECKING that comes before the AC_MSG_ERROR...)

Well, AC_MSG_NOTICE is undefined. Guess the autoconf being used is different from the one I found online. AC_MSG_ERROR and AC_MSG_FAILURE cause exits, but AC_MSG_WARN writes to stderr and continues. Guess that's what I'll have to use then.

So, I inserted the change, and created a quick diff so that I can reapply it as a patch next time....


--- Makefile.orig  2013-06-03 17:45:05.000000000 -0500
+++ Makefile  2013-06-04 18:22:37.335175851 -0500
@@ -89,6 +89,7 @@
  @${REINPLACE_CMD} -e '/MOZPNG/s/=[0-9]*/=10511/' \
    -e '/^SQLITE_VERSION/s/=.*/=' \
+    -e '/with SQLITE_SECURE_DELETE/s/_ERROR/_WARN/' \



  08:20:00 pm, by The Dreamer   , 509 words  
Categories: Home Theatre, WiFi, Storage, Samsung UN50ES6500F, FreeBSD, VirtualBox

I got another HSTi Wireless Media Stick


I had acquired my first HSTI Wireless Media Stick back on April 24th, 2011 (from a marketplace seller...it took some time to arrive) and I blogged about it on April 30th, 2011 -- Getting local content to show on my Roku XDS.

Now my Roku has moved to my other HD display (24" 1080p), but that was before my old Samsung HDTV (43" 720p) regenerated into a Samsung Smart 3DTV (50" 1080p).... so I'm back to the living room TV being my main viewing device for all content, though TiVo has a box that I could connect to the smaller display to access my TiVo content there....which I may want to get at a later date. The majority of content I watch is from....one of these days I need to set up my blu-ray player so I can get back into watching DVDs (not sure when I'll have blu-ray discs, but I need to get back to my netflix backlog).

But, the other day I had an mp4 file that I needed to play....and I thought I should get some way to do that on my 50" HD display... I had to settle for using the Roku for a bit, and decided that the plan would be to acquire another HSTi Wireless Media Stick.

After searching around online, eventually found that ordering directly from HSTi was the only option now. So, I ordered another one on May 17th. It arrived yesterday. But, I didn't set it up until I got home from work today. Somehow I had forgotten again that HSTi is in Calgary, Alberta. Not that I'll be going up there in the immediate future....

Anyways, no big surprises...good thing I had solved my USB 2.0 and Windows 7 in VirtualBox on FreeBSD problem (got a Silex SX-DS-4000U2). I'm still sharing TARDIS from orac to it, since I don't yet have a replica on zen (need to free up space). Though when I moved the HSTi Wireless Media Stick it had forgotten the share, so I had to pull up the web interface again and add it back. Interesting that its graphic shows the stick itself, while the graphic on my older stick is that of the original Wireless Media Stick (it used to be the correct graphic, but after an update it keeps showing the graphic of the older version.) Though this one came from the factory with the latest firmware, so who knows what'll happen when there's an update.

Was interesting using the SmartTV to view it, though I wonder if it'll be a problem with it constantly discovering the stick every time I turn it on and presenting dialogs and such. Afterwards I tried the Amazon app to see if that was working. It was still saying I needed to update my TV, though this time there was an update....and now that works. Which might make it interesting to decide what I should do. The only problem with using the SmartTV versus some other viewer....the TV is only 2.1 audio, while through other routes I can get 5.1, and it's a different input on my receiver....

Oh well, back to other projects....
