
06/21/13

  08:48:00 am, by The Dreamer   , 150 words  
Categories: FreeBSD

amd[###]: No map entry for <old_share>

For almost a year, every 3 seconds, these two lines have been appearing in /var/log/daemon.log:

amd[####]: No map entry for <old_share1>
amd[####]: No map entry for <old_share2>

Every now and then, I would spend time hunting around my filesystem trying to figure out what part of amd might still be holding onto the old information...it's not in any of the config files, it's not in any of its directories, and it's not making symlinks for it anywhere.

I was finally about to give in and try asking on a mailing list....when, while I was writing up how I did things for the message, I expanded my search to make really sure that there weren't any lingering references....and found that I had two symlinks in my home directory pointing to the old shares.

Once I removed those symlinks....the messages stopped.

I guess the gam_server process that is watching my home directory was polling those symlinks every 3 seconds....
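In hindsight, a quick sweep of symlink targets would have found them right away. A minimal sketch, where "old_share" stands in for the actual share name from the log messages:

```shell
#!/bin/sh
# Print every symlink under $HOME whose target mentions the stale share.
# "old_share" is a placeholder for the real share name amd complains about.
find "${HOME}" -type l | while read -r link; do
    target=$(readlink "$link")
    case "$target" in
        *old_share*) printf '%s -> %s\n' "$link" "$target" ;;
    esac
done
```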

:oops:

06/15/13

  04:04:00 pm, by The Dreamer   , 1123 words  
Categories: Software, FreeBSD, CFEngine

cbox/dbox cfengine update also full of fail

First I had saved libpromises.so.1, so that I could invoke cf-agent from /var/cfengine/bin to pull in the new cfengine-3.5.0 binaries and pull up the new inputs from my policy server.

Except I forgot to commit the 'bundle agent foo' kluge, and I had done an 'svn revert ...' to undo all the fiddling I had been doing on the policy server.

But, after I make the change, cbox/dbox refuse to copy up the new 'do-crontab.cf' file. I try running things verbose, 'cf-agent -v > out', but there's no out file??? :??: Did I slip? Am I losing my mind?

Guess I should do it in another directory, because the update does purge...so it's removing my 'out' file. :oops:

So, its saying this:

2013-06-15T14:40:55-0500  verbose: Entering directory '/var/cfengine/inputs'
2013-06-15T14:40:55-0500  verbose: Device change from 1242801830 to 843968349
2013-06-15T14:40:55-0500  verbose: Skipping '/var/cfengine/inputs/do-mysql.cf' on different device
2013-06-15T14:40:55-0500  verbose: Device change from 1242801830 to 843968349
2013-06-15T14:40:55-0500  verbose: Skipping '/var/cfengine/inputs/do-ddclient.cf' on different device

What are those device numbers....is it saying that because the file is remote that it won't copy it? After some looking around....I see that 'u_recurse("inf")' is:

body depth_search u_recurse(d) {
        depth   => "$(d)";
        xdev    => "true";
}

Which seems perfectly reasonable: I want it to recurse the destination directory and not cross devices. But apparently, it now means don't recurse into devices that differ from the source directory's device???

I look around to see if I had added the 'xdev' line or if it came from where I based my initial setup....and find that it's what was given over on Unix Heaven - http://www.unix-heaven.org/node/53#cf3-update

That change in behavior totally doesn't make any sense. But, enough hair pulling.... let's get things running again on cbox & dbox.
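For anyone else bitten by this: one workaround, assuming the new source-device interpretation sticks, is to just drop the xdev line from the body used for the update copy. A sketch:

```
body depth_search u_recurse(d) {
        depth   => "$(d)";
        # xdev dropped: under 3.5.0 it appears to compare against the *source*
        # directory's device, so a remote copy source always looks "different"
}
```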

Hmmm, now it's spewing warning messages:

2013-06-15T14:47:31-0500  warning: Setting replace-occurrences policy to 'first' is not convergent
2013-06-15T14:47:32-0500  warning: Setting replace-occurrences policy to 'first' is not convergent
2013-06-15T14:47:32-0500  warning: Setting replace-occurrences policy to 'first' is not convergent
2013-06-15T14:47:33-0500  warning: Setting replace-occurrences policy to 'first' is not convergent
2013-06-15T14:47:33-0500  warning: Setting replace-occurrences policy to 'first' is not convergent
2013-06-15T14:47:35-0500  warning: Setting replace-occurrences policy to 'first' is not convergent

I didn't get these before...and there's only one occurrence, so it is convergent. And, my use of 'replace_with => With("...")' for 'replace_patterns:' was lifted from -- https://cfengine.com/archive/manuals/cf3-solutions. So, in With(), I should change 'occurrences' from "first" to "all"....why have this attribute if its purpose is just to annoy now?
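The body in question, with the one-word change that quiets the warning (a sketch based on the cf3-solutions With() body; only the occurrences value differs):

```
body replace_with With(x) {
        replace_value => "$(x)";
        occurrences   => "all";  # was "first"; 3.5.0 warns "first" is not convergent
}
```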

It's already annoying enough that -q is gone....but -K remains? If I'm running cf-agent manually to speed up picking up a change....I'd like to skip having to wait splaytime, but I don't want to ignore locks...because if I'm close to when cf-execd wants to run....it results in a big mess, especially for promises using single copy nirvana.... cf-agent doesn't distinguish why the specific copy failed when there's a collision in this case, so it causes a less specific version to get copied....so that it has to fix it in the next run.

This causes problems for things like DNS and NTP. The specific config is that cbox is an NTP server that polls external servers in freebsd.pool.ntp.org....and the generic NTP config is to use cbox and dbox as NTP servers....which has resulted in wild client oscillations. Though both my dsl and cable connections have been rather unstable lately...dsl will just have periodic drops, while cable goes out until I reboot my cable modem....and the fact that using two NTP servers is probably the worst combination doesn't help. But, I don't think I could get away with 4+ NTP servers on my home network. DNS....well, the specific config is that cbox is primary authoritative and dbox is secondary authoritative, while the generic config is a recursive caching resolver....similar to how I did DNS servers under cfengine at work.....

4+ justification - http://www.ntp.org/ntpfaq/NTP-s-algo-real.htm#Q-NTP-ALGO

NTP likes to estimate the errors of all clocks. Therefore all NTP servers return the time together with an estimate of the current error. When using multiple time servers, NTP also wants these servers to agree on some time, meaning there must be one error interval where the correct time must be.

Number of servers:

  1. Always trusted, even when it's totally wrong.
  2. If they differ, how do you break the tie?
  3. A server failing puts you back in the above situations.

So, 4+ servers for reliable accuracy.

Whether the total should be odd or some other sequence, is another debate.

http://www.ntp.org/ntpfaq/NTP-s-config-adv.htm#Q-SERVER-NUMBER

But three is a good place to start, and you can progress to three-groups-of-three if you feel the need.

three-groups-of-four? four-groups-of-four?
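If I ever did go to 4+, the client side would be something like this (a sketch of an ntp.conf fragment; iburst is optional but speeds initial sync):

```
# four upstream servers, so one falseticker can be outvoted
server 0.freebsd.pool.ntp.org iburst
server 1.freebsd.pool.ntp.org iburst
server 2.freebsd.pool.ntp.org iburst
server 3.freebsd.pool.ntp.org iburst
```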

Hmmm, I don't know why I had never done 'cf-agent -v > out' in /var/cfengine/inputs on cbox/dbox before....but I often do it on my policy host. I guess I don't normally need to be in /var/cfengine/inputs on cbox/dbox when I'm troubleshooting cfengine.

Hmmm, so in the verbose run, I only see two places with warnings??? That seems odd.

cf3>     .........................................................
cf3>     Promise's handle:
cf3>     Promise made by: "name="tor""
cf3>     .........................................................
cf3>
cf3>  -> Looking at pattern name="tor"
cf3> WARNING! Setting replace-occurrences policy to "first" is not convergent
cf3>  -> Verifying replacement of "name="tor"" with "name="tor2"" (2)
cf3>  -> Replace first occurrence only (warning, this is not a convergent policy)
cf3>  -> Replaced pattern "name="tor"" in /usr/local/etc/rc.d/tor2
cf3>  -> << (2)"name="tor2""
cf3>  -> >> (2)"name="tor2""
cf3>  -> Replace first occurrence only (warning, this is not a convergent policy)

Wait....it makes even less sense.... this edit promise is about creating /usr/local/etc/rc.d/tor2 from /usr/local/etc/rc.d/tor. Since the starting file is fixed, the first occurrence is inherently fixed...so replacing it is convergent. So, it's just there to annoy.

And, wait....it's also having issues using recurse() from cfengine_stdlib.cf....well, that has xdev => "true"; set as well. And, that body is unchanged from the previous version.

Oh, I suppose I could remove splaytime.... it's not like I have a huge number of systems....

Wait... the tor2 edit_lines also has an if_elapsed("60"), so the warnings are from other edits that were probably skipped. Guess I'll have to risk a -K to find all the occurrences of the warning.

Though it's strange that I'm only seeing the messages on dbox, even though the promise exists for both cbox & dbox....

OTOH, on cbox it did catch an error.... where there was a promise that "/usr/local/www/apache22/data/tivo/.", etc. was accessible to the webserver. Except I had typed "/usr/local/www/apache2/data/tivo/.", etc. There was no error/warning about skipping this promise before. Though in 3.5.0, it gave an error that it couldn't chdir to this directory...which delayed my noticing the actual reason for the error.

And, once again...another wasted Saturday.... I didn't really need to go to the mall and spend money I don't have, I suppose. :hmm:

Now before I can go back to just doing 'portmaster -a' to keep current....I still need to decide the fate of the databases/mysql55-server update.... I suppose while cfengine was down, I could've done the upgrade if the decision was to stay put. But, still wondering if I want to try percona, or perhaps better yet, maria! I wonder what my friend Maria is up to these days....

  01:15:00 pm, by The Dreamer   , 694 words  
Categories: Software, FreeBSD, CFEngine

Meanwhile upgrading cfengine-3.4.4 to cfengine-3.5.0 not going well.

Upgrading the port was no problem....but it broke my cfengine. Why? The port puts the cfengine binaries in /usr/local/sbin, while the cfengine practice is that it has a private copy in /var/cfengine/bin. Which would be fine if the binaries didn't have shared library dependencies. Which they do, specifically libpromises.so.1 which is gone in cfengine-3.5.0...there's a libpromises.so.3.

Though before I discovered this problem, I first wanted to make some tweaks to update.cf so that I would have some indication that it had copied up new binaries from /usr/local/sbin to /var/cfengine/bin, since I had noticed that files there were newer than expected. Though I had probably just rebuilt the same version of the port because a dependency had updated and /usr/ports/UPDATING indicated that I needed to do that.

This probably is why, at work, the person that set up our cfengine 2 went to extreme effort to create static cfengine executables...ignoring that such things are officially not supported on Solaris. Though we seemed to get away with running those executables, built on a Sol10u3 sun4u system...on systems as current as Sol10u11, a few Sol11 systems, and systems of the sun4v architecture.

In a past life...we had run into a statically built executable (the installer) not working on our first UltraSPARC-III system (Sun v280r)...trying to recall what our build machine was back then.... my recollection says we only had the SPARCserver 20 and SPARCstation 10, before that. Though as I recall, we had to wait for a patch from Sun as well as rebuild the executable shared on the SPARCserver 20...to have it work. It wasn't long after that though that we retired support for sun4m, changing minimum requirements. Wonder if the application has become 64-bit yet? But, for ABI backwards compatibility claim to work, the executable needs to be built shared...so that it'll find the libraries provided on newer systems to allow older executables to still work.....

portmaster probably didn't know that it should save /usr/local/libexec/cfengine/libpromises.so.1, though would the old executables know how to find the library when it's moved aside? (I do have SAVE_SHARED=wopt uncommented in my portmaster.rc file.)

Occurs to me that I could just restore the file from backup; it would allow me to run failsafe.cf and get me back to where everything should work again.

Though before I did that, I had invoked cf-promises (the one in my path -- /usr/local/sbin), and it complains about library.cf. Guess it doesn't like the old cfengine_stdlib.cf; the new one isn't where the old one was....it's here instead --> /usr/local/share/cfengine/CoreBase/libraries/cfengine_stdlib.cf. I do a quick look at what's in it....mainly to make sure that the bundles/bodies I use are still there...and notice some interesting new ones....such as a package_method of freebsd_portmaster. Someday I should look at cfengine3 to do port/package promising....

But first, get cfengine working on policyhost; hopefully the other servers (at 3.4.4) are still working.....guess not, 3.4.4 doesn't like the 3.5.0 cfengine_stdlib.cf file. But, cf-promises is also not happy with some of my other promises....

Guess I'll update those while I get policyhost working again.

.
.
.

Or perhaps I need to revert....

root@zen:/var/cfengine/inputs 317# cf-agent
2013-06-15T13:22:53-0500    error: Bundle 'crontab' listed in the bundlesequence is not a defined bundle
2013-06-15T13:22:53-0500    error: Fatal CFEngine error: Errors in promise bundles
1.755u 0.113s 0:01.94 95.8%     172+2501k 133+12io 1pf+0w
root@zen:/var/cfengine/inputs 318# 
# cf-agent -v
...
2013-06-15T14:00:57-0500  verbose: Parsing file '/var/cfengine/inputs/do-crontab.cf'
...

It's there, why's it not working.... 'cf-agent -d' doesn't work, but it will only do failsafe....

Full story »

  09:12:00 am, by The Dreamer   , 390 words  
Categories: Software, FreeBSD

Perl update continued

For some reason I had cd'd into /usr/local/lib/perl5 on dbox and noticed that 5.16.2 was still present...well, 5.12.4 was still on zen after the upgrade to 5.14.2, and that had just had whatis files. But, I went and looked inside, and found more than just whatis files.

Using 'pkg_info -W', I found that I had other ports that had installed perl modules that didn't start with 'p5-' or depend on 'libperl.so'.

So, off to rebuild those ports.
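The hunt itself is scriptable...something like this lists the leftovers so each can be fed to 'pkg_info -W' (a sketch; the tree path is just an example, and PERL_TREE should point at whatever stale version directory is lingering):

```shell
#!/bin/sh
# List non-whatis files left under a stale perl tree; each of these can
# then be passed to 'pkg_info -W' to find the port that installed it.
tree=${PERL_TREE:-/usr/local/lib/perl5/5.16.2}
if [ -d "$tree" ]; then
    find "$tree" -type f ! -name 'whatis*' -print
fi
```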

On dbox/cbox it was just databases/rrdtool, print/pdflib...plus some stray files left by already updated ports or removed ports. But, on zen there was a much bigger list of ports:

security/clusterssh
graphics/ImageMagick
japanese/p5-Jcode (which was missed, because the package name is ja-p5-Jcode-)
devel/perltidy
mail/razor-agents
security/clamtk
print/foomatic-db-engine
graphics/gscan2pdf
x11-clocks/intclock
databases/rrdtool
print/pdflib

Hmmm, probably need to update my i386 space, which is going to be wrong now...because it uses the make.conf named 'global', and I haven't updated it in a long time.... Not since May 4th. emulators/wine-devel has been updated since then, so I guess I'll have to tackle it sooner rather than later.... Especially since I'm thinking of making another attempt to see if I can get other apps running in wine versus VirtualBox....

Full story »

06/14/13

  09:12:00 am, by The Dreamer   , 1412 words  
Categories: Healthcare, Quicken/TurboTax

Screwed by Quicken

A while back I was having trouble updating transactions in my TIAA-CREF account. I used to update it by entering each transaction by hand every so often, but then a few years ago (when I rolled over a Rollover IRA, in which I had parked the 401k from my previous employer, as the invisible money fee was huge....share counts would drop every month, with no explanation in the account statements) I let them generate the funds I should spread my retirement into...which makes it much harder to be entering transactions by hand.

I had tried the Quicken download option, but the dates of the transactions didn't line up with my pay days or the website's transaction history. So, making the download match what I was entering was tedious, as was not entering any by hand and adjusting after download. Also, the download likes to splatter my register with placeholders, and then complain that the placeholders are missing information so it can't do gain calculations.

So, originally, I'd only invest in 6 +/- 1 funds. Basically a fund for each slice, at some multiple of 5% rather than the specific percentage an investment tool had suggested for me.

Now, I have my retirement funds spread over 12 funds in my Mandatory Plan, and 19 funds in my Voluntary Plan (since the Voluntary has access to more choices than those specified by the pension administrator). The Mandatory Plan is funded by the mandatory 5.5% that comes out of every paycheck, plus an 8.5% match from my employer. While the Voluntary Plan is money that came from other sources, which could be in the form of additional deductions from pay. But, in my case it represents what was in my previous 401k.

So, I just let the Quicken download be as it is....deleting most of the placeholder transactions, because the only transaction that doesn't appear anywhere is the share count growth of my TIAA-CREF Traditional Annuity. But, I just change those into reinvestment actions with a price of $1. Not sure how I would get Quicken to tell me what the gain/loss % is from that....

Somebody had described how to do the math to get include re-investments into the overall gain, perhaps I'll have to look into that someday.

Anyways, I noticed that I somehow hadn't done a download in almost 2 months (since the download ranges are 30, 60, 90 or All), so I try to do it about every 30 days. Quicken doesn't seem too bright on knowing that transactions which overlap the previous download aren't new, and it'll refuse to let me manually match them with the correct transaction, since the transaction had already been matched (or created by a previous download). So, I have to delete some of the new transactions, along with all the placeholder entries...before accepting the rest.

Normally this works great....even if the dates happen a day or two before payday. It makes the transfer from my associated cash account for my Mandatory Plan...sure it might go negative...but it all zeros out in the end, usually.

But, then last fall, there was a weird extra $3 and change in my cash account. I kept looking for a missing transaction, but didn't see one. Eventually, I found it in my annual birthday re-balancing...where it sells parts of some funds, and Quicken transfers into my cash account, and then buys amounts in the other funds with Quicken transferring out of my cash account. It didn't do that when it added to my Wells Fargo Advantage Growth Fund Institutional. I fixed it by hand, somehow, and continued on my way.

Full story »

  08:36:00 am, by The Dreamer   , 959 words  
Categories: Software, FreeBSD, CFEngine

Perl update + cfengine3 and subversion == great big mess on FreeBSD

So, I'm up ridiculously early this morning, because I woke at the right time during the night...but forgot to take my second dose, and it was too late to take it when I woke again.

So, I thought I would tackle updating to the latest perl on zen. I chose not to just do a blind 'portmaster -r perl', since that would include anything that depends on being able to run perl or depends on something that depends on perl...possibly many levels down. Somebody should come up with a simple way to only rebuild the ports that directly depend on another port....

What I decided to do, after updating perl, was 'portmaster p5-*', then do a pkg_libchk to see which ports are missing libperl.so, and redo those.

I did this first by logging into my machine at work (mew), and it went quickly and worked pretty cleanly. Though there it has only half the number of p5- ports, and fewer ports missing libperl.so as well.

So, back on zen...I updated perl and then started the 'portmaster p5-*', when it stopped. A port is dependent on /usr/local/bin/perl5.14.2, which is not found because I just updated to 5.14.4!

How do you have ports that are dependent on a specific version, especially since it's possible to have a different version of perl? When I first installed zen, the default perl was 5.12.x...but I had elected to upgrade to 5.14.x when that became the new default. OTOH, when I set up cbox/dbox, I opted to go with 5.16.x...and at work, mew had started before zen, so it was also 5.12.x initially...but I chose to jump that up to 5.16.x (actually no .x at first, but it was very quickly followed by a .1...which inflicted a lot of pain again). Though in this latest update they have switched to just major.minor for the module path, which means no more pain until it's time to upgrade to a newer minor release....

The first casualty was devel/p5-B-Keywords ...and I find it's depended on by textproc/p5-Perl-Critic and textproc/p5-Test-Perl-Critic....with the last port being the leaf. So it seems it was probably just a build dependency for something I had installed long ago (since it doesn't sound like anything I could call in any of my own perl scripts). So, I figure I'll just delete those packages and carry on.

Nope, the next port also fails with the same strange demand.

:##

After hunting around a bit, to try and figure out what is making this port think it depends on the previous version of perl...it dawns on me that the perl port updates /etc/make.conf as to what version of perl is installed.

And, I have /etc/make.conf being managed by cfengine3. :oops:

So, I go update the file, and try to svn commit the change....which blows up because the perl modules needed for the commit hook haven't been updated yet.

Well, guess I'll stop cfengine3 from reverting my /etc/make.conf (by disabling the promise from the root side). Though IIRC now, the commit hook is mainly to prevent root from doing commits into my subversion repository, instead of what we do at work: put subversion on an NFS filesystem and have root squash prevent root from being able to write into the repository.

Phew....I can commit updated /etc/make.conf and let cfengine promise it. Perhaps in a future project, I'll see if there's some way to have cfengine set that dynamically or something.
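For context, the stanza in question is tiny...the lang/perl port maintains something like this in /etc/make.conf (the exact comment markers vary; the version shown is just this upgrade's):

```
# added by use.perl (normally maintained by the lang/perl port,
# but here clobbered by cfengine until the managed copy is updated)
PERL_VERSION=5.14.4
```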

Though should I have cfengine promising all of /etc/make.conf? There's a block in /etc/make.conf that is the same across all my FreeBSD systems, since it's the ports that I come across that don't like 'make -j#'. And, it was intended to have cfengine promise that part, though there have been other additions that I want the same on all my FreeBSD systems, like the override on modules in net-snmp, which I mentioned when I ran into some cacti. Though there's value in having cfengine have the whole file....or rather, it's that the files are in subversion.

Next up...nagios has alerted me that spamassassin has stopped. Yeah, I guess that would happen. Which means there's going to be a big chunk of spam in all my mailboxes (126...I need to find a good way to aggregate them back so that I can read them all ... from roundcube). Module rebuilds are done, and spamassassin seems to be running again (probably because cfengine3 has a promise to keep for that). Though it might work better if I do an 'sa-compile' to make sure that part is right...though it seemed to me it was only major.minor....

Full story »

06/08/13

  08:42:00 pm, by The Dreamer   , 1176 words  
Categories: Software, Computer, Storage, FreeBSD, CFEngine

Another weekend seems to be slipping away on me....

And, it's the same time suck....cacti.

Last weekend got away from me, because I made another attempt to improve cacti performance. I had tried adding 3 more devices to it, and that sent it over the limit.

I tried the boost plugin....but it didn't help, and only made things more complicated and failure prone. Evidently, updating rrd files is not a constraint on my cacti server. Probably because of running on an SSD.

I made another stab at getting the percona monitoring scripts to actually work under script server, but that failed. I suspect the scripts aren't reentrant, because of their use of global variables and relying on 'exit' to cleanup things it allocates or opens.

I had blown some previous weekend when I tried to build the most recent version of hiphop to maybe compile the scripts, but after all the work of figuring out how to compile the latest 2.0.x version...it would SEGV, just as the older lang/hiphop-php did after I resolved the problem of building with the current boost (a template had changed to need a static method, meaning old code won't link with newer boost libraries without a definition of it). And, this is beyond what I have in my wheelhouse to try to fix.

During the week, I had come across some more articles on tuning FreeBSD, namely a discussion of kern.hz for desktops vs servers. The default of 1000 is good for desktops, but the historical setting of 100 is what to use for servers. Though IIRC, ubuntu uses 250 HZ for desktops and 100 HZ for servers; it also doesn't do preemption in its server kernel, along with other changes (wonder if some of those would apply to FreeBSD?). Though modern kernels have been moving to be tickless. Which I thought was in for FreeBSD 9, though the more correct term is dynamic tick mode...and which is more about not doing unnecessary work when things are idle. Which isn't the case with 'cbox'. So, perhaps, fiddling with kern.hz and other sysctls might still be relevant. Though I haven't really found anything detailed/complete on what would apply to my situation.

So, I thought I would give kern.hz=100 a shot.
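The change itself is a one-liner in /boot/loader.conf, picked up at the next reboot:

```
# drop the timer interrupt rate from the 1000 Hz default
kern.hz=100
```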

At first it seemed to make a difference....no improvement in how long it takes to complete a poll, but the load was lower. Until I realized that a service had failed to start after reboot. I had only run the rc script by hand, I hadn't tested it in a reboot situation. And, it's not a real rc script....it used to be a single line in rc.local that worked on ubuntu and FreeBSD (except on one of the Ubuntu systems it resulted in a ton of zombie processes, so making it an init.d script that I could call restart on happened).

So, I spent quite a lot of time reworking it into what will hopefully be an acceptable rc script. One thing I had changed earlier was the use of a pipe ('|'), which was causing the process after the pipe to respawn and turn the previous process into a zombie each time the log file was rotated and "tail -F" announced the switch. And, this was while I was moving the service to FreeBSD (and management under cfengine 3).

Though looking at my cacti graphs later....while the service had failed to start after reboot, it turned out to have been running for some time, until I had broken it completely in trying to rc-ify the init script. Well, duh....I had cfengine set to promise that the process was running, and it had repaired that it hadn't started after the reboot.

Another thing I had done when I init-ified the startup of this service was switch from using a pipe ('|') to using a fifo, which addressed the respawning and zombie problem and eliminated the original reason to have an init.d script....

While the init.d script had worked on FreeBSD...it was just starting the two processes with '&' on the end, then exiting. FreeBSD's rc subroutines do a bit more than that. So things weren't working. The problem was that even though I was using daemon instead of '&', so that daemon would capture the pid and make a pidfile, it seems daemon wants the process it manages to be fully working before it'll detach. But, the process is blocked until there's a sink on the other end of the fifo. (Does sink fit as the name for the fifo's reader?) I first wondered if I could just flip the two around, but I suspect starting the read process first would be just as blocked until the write process is started. So, I cheated by doing a prestart of the writing process and only tracking the reading process.

Though it took a bit more work to get the 'status' action to work....eventually I found I needed to define 'interpreter', since the reading process is a perl script. And, check_pidfile does more than just check to see if there's a process at the pid; it checks that it's the right process. And, it distinguishes between arg0 and the rest.
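Pieced together, the arrangement looks roughly like this (a sketch only: the names logreader, logpipe, and both script paths are made up for illustration, and the real script surely differs):

```sh
#!/bin/sh
# Sketch of the rc.d arrangement described above; all names are invented.

# PROVIDE: logreader
# REQUIRE: DAEMON

. /etc/rc.subr

name="logreader"
rcvar="logreader_enable"

pidfile="/var/run/${name}.pid"
fifo="/var/run/logpipe"
command="/usr/local/libexec/logreader.pl"
command_args="${fifo}"
command_interpreter="/usr/local/bin/perl"  # so check_pidfile can match a perl script
start_precmd="logreader_prestart"

logreader_prestart()
{
        # The cheat: start the fifo's writer first, untracked; the reader
        # (the tracked process) then opens the fifo and data starts flowing.
        [ -p "${fifo}" ] || mkfifo "${fifo}"
        daemon -f /usr/local/libexec/logwriter.sh "${fifo}"
}

load_rc_config $name
run_rc_command "$1"
```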

Pretty slick...guess I need to do a more thorough reading of the various FreeBSD handbooks, etc. Of course, it has been 13+ years between when I first played with FreeBSD and its takeover of my life now.

As for the tuning....it had made a small difference, but no improvement on cacti system stats. Basically the load average fluctuates a bit more and the CPU utilization seems to be a bit lower...though that could be because the 4 lines of the cacti graph aren't so close to each other now.

Meanwhile...I noticed that one of the block rules in my firewall had a much higher count than I would expect, so I set about getting logging configured to see what that's about.....(which I was working on when I remembered that I hadn't rebooted after making the kern.hz change to /boot/loader.conf yesterday...the commit also picked up files that I had touched while working on moving the one remaining application on 'box', though that may get delayed to another weekend....perhaps the 4 day one coming up).

I had set cf-execd's schedule to be really infrequent (3 times an hour), because I was doing a lot of testing and cf-agent collisions are messy....messier than they were in cfengine 2 (in 2, it usually just failed to connect and aborted; in 3, it would keep trying and splatter bits and pieces everywhere....which is bad when there are parts using single copy nirvana, resulting in services getting less specific configs until the next run).

But, I sort of brought back dynamic bundle sequences.... keyed off of "from_cfexecd", so I can test my new promise with fewer problems of colliding with established promises. Though there are other areas where things still get messy.... I need to clean up some of the promises I had based on how things were done at work, so that the promises are more standalone.

Kind of weird using my home cfengine 3 setup, and other admin activities, as the means to break the bad habits I had picked up at work....

  07:05:00 pm, by The Dreamer   , 2118 words  
Categories: Software, Computer, BOINC, FreeBSD

sqlite3 SECURE_DELETE and Firefox

So a few days ago, databases/sqlite3 was updated in ports. And, in the portmaster run, I was faced with its config dialog. Think I had gone with the defaults previously, but decided to take a closer look this time. Saw SECURE_DELETE, with the description "Overwrite deleted information with zeros". That sounds like a waste of time, I should probably turn that off.

A quick online search, I found this:

The secure_delete setting causes deleted content to be overwritten with zeros. There is a small performance penalty for this since additional I/O must occur. On the other hand, secure_delete can prevent sensitive information from lingering in unused parts of the database file after it has allegedly been deleted.

Yup, definitely just a waste of time...it even says so. The OTOH part, though, is wrong for me. Why? Because I'm running my FreeBSD system on ZFS, which is copy-on-write. It's just spinning its wheels creating a new copy of the file filled with zeros, while the old file is left unlinked somewhere intact, and then unlinking that new copy it had filled with zeros. When just unlinking the old file achieves the same thing faster.

Of course, what happens is a little while later there's an update to www/firefox in ports, where the configure fails because sqlite3 wasn't built with SQLITE_SECURE_DELETE. Well, I'm not turning on stupid for Firefox...I'm already disappointed by how slow it has become (and PGO seems to be broken again), to where chrome/chromium is now my everywhere browser. Which is working for the most part, now that I don't have a Solaris workstation as part of my everywhere.

Well, it's just configure that is testing for it and complaining...so there should be a way to turn it off. Hmmm, no option to do that, guess I'll have to alter the configure script. Do I inject a patch into the files directory? Looks like the file is being adjusted elsewhere, though I don't see a patch in files that is working on it. Okay, it's the post-patch target in the Makefile. Can I just add to that? Guess the way to do it is to change AC_MSG_ERROR to something that doesn't terminate the configure. Unfortunately, I have the portmaster.rc option "PM_DEL_BUILD_ONLY=pm_dbo" uncommented, so I can't quickly look at what AC_MSG_??? I could use. Found some online documentation that describes AC_MSG_CHECKING, AC_MSG_RESULT, AC_MSG_NOTICE, AC_MSG_ERROR, AC_MSG_FAILURE, AC_MSG_WARN...the first 3 are messages that aren't emitted if the '--quiet' or '--silent' options are used. I don't think those options are used normally, but it seems like a good idea to me. I'll use AC_MSG_NOTICE (though now that I think of it, AC_MSG_RESULT is probably valid, since it was an AC_MSG_CHECKING that came before the AC_MSG_ERROR...).

Well, AC_MSG_NOTICE is undefined. Guess the autoconf being used is different from the one I found online. AC_MSG_ERROR and AC_MSG_FAILURE cause exits, but AC_MSG_WARN writes to stderr and continues. Guess that's what I'll have to use then.

So, I insert the change, and create a quick diff so that I can reapply it as a patch next time....

Code

--- Makefile.orig  2013-06-03 17:45:05.000000000 -0500
+++ Makefile  2013-06-04 18:22:37.335175851 -0500
@@ -89,6 +89,7 @@
post-patch:
  @${REINPLACE_CMD} -e '/MOZPNG/s/=[0-9]*/=10511/' \
    -e '/^SQLITE_VERSION/s/=.*/=3.7.14.1/' \
+    -e '/with SQLITE_SECURE_DELETE/s/_ERROR/_WARN/' \
    ${WRKSRC}/configure.in
  @${REINPLACE_CMD} -e 's|%%LOCALBASE%%|${LOCALBASE}|g' \
    ${WRKSRC}/browser/app/nsBrowserApp.cpp


05/24/13

  08:20:00 pm, by The Dreamer   , 509 words  
Categories: Home Theatre, WiFi, Storage, Samsung UN50ES6500F, FreeBSD, VirtualBox

I got another HSTi Wireless Media Stick

Link: http://hsti.com/products/wirelessmediastick

I had acquired my first HSTI Wireless Media Stick back on April 24th, 2011 (from a marketplace seller on Amazon.com)...it took some time to arrive and I blogged about it on April 30th, 2011 -- Getting local content to show on my Roku XDS.

Now my Roku has moved to my other HD display (24" 1080p), but that was before my old Samsung HDTV (43" 720p) regenerated into a Samsung Smart 3DTV (50" 1080p).... so I'm back to the living room TV being my main viewing device for all content, though TiVo has a box that I could connect to the smaller display to access my TiVo content there....which I may want to get at a later date. The majority of content I watch is from TiVo...one of these days I need to set up my blu-ray player so I can get back into watching DVDs (not sure when I'll have blu-ray discs, but I need to get back on my netflix backlog).

But, the other day I had an mp4 file that I needed to play....and I thought I should get some way to do that on my 50" HD display... Had to settle for using the Roku for a bit. And, decided that the plan would be to acquire another HSTi Wireless Media Stick.

After searching around online, eventually found that ordering directly from HSTi was the only option now. So, I ordered another one on May 17th. It arrived yesterday. But, I didn't set it up until I got home from work today. Somehow I had forgotten again that HSTi is in Calgary, Alberta. Not that I'll be going up there in the immediate future....

Anyways, no big surprises...good thing I had solved my USB 2.0 and Windows 7 in VirtualBox on FreeBSD problem (got a Silex SX-DS-4000U2). I'm still sharing TARDIS from orac to it, since I don't have a replica on zen yet (need to free up space). Though when I moved the HSTi Wireless Media Stick it had forgotten the share, so I had to pull up the web interface again and add it back. Interesting that its graphic shows the stick itself, while the graphic on my older stick is that of the original Wireless Media Stick (it used to be the correct graphic, but after an update it keeps showing the graphic of the older version.) Though this one came from the factory with the latest firmware, so who knows what'll happen when there's an update.

Was interesting using the SmartTV to view it, though I wonder if it'll be a problem with it constantly discovering the stick every time I turn it on and presenting dialogs and such. Afterwards I tried the Amazon app to see if that was working yet....it was still saying I needed to update my TV, though this time there was an update....and now that works. Which might make it interesting to decide what I should do. The only problem with using the SmartTV versus some other viewer....the TV is only 2.1 audio, while through other routes I can get 5.1, and it's a different input on my receiver....

Oh well, back to other projects....

04/28/13

  10:06:00 pm, by The Dreamer   , 989 words  
Categories: Hardware, Software, Computer, Ubuntu, FreeBSD, CFEngine

Took a diversion from cacti and now its nagios

So, doing cacti on cbox doesn't seem to be working long term... but, the moment is being prepared for....I'm starting to assemble the pieces to build a new machine to do this and handle some other tasks that I've been looking for a place for.

Back to cfengine, I added a promise for dnetc (distributed.net)....and then a promise to finally configure CUPS on the two servers. And, then I turned to nagios.

I spent a couple evenings creating the initial configuration of nagios, working in design changes that I wanted to make and initial monitoring of localhost (dbox). Though it wasn't straightforward....there were differences here and there....mostly in FreeBSD layout, paths, and some of the commands taking different options. But, eventually I got everything running. My old check_dyndns worked once, but then stopped working.... the problem was that it did 'stat -c "%Y" ...', which doesn't work on FreeBSD; 'stat -f "%m" ...' was the adjustment for that. And, while all the check_* plugins seem to be there, the command definitions were lacking....but I guess having command definitions for everything is part of the Debian/Ubuntu packaging. There were other frills that came with that, that I don't mind not having...
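The GNU vs. BSD stat split bites a lot of scripts; a portable wrapper looks something like this (the mtime function name is mine, not anything from the plugin):

```shell
# Print a file's modification time as seconds since the epoch.
# GNU stat (Linux) takes -c '%Y'; BSD stat (FreeBSD) takes -f '%m'.
# Try the GNU form first, and fall back to the BSD form if it fails.
mtime() {
  stat -c '%Y' "$1" 2>/dev/null || stat -f '%m' "$1"
}
mtime /etc/hosts
```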

I did run into check_ntp being deprecated....with check_ntp_time and check_ntp_peer being the tests to use....separating them makes it clearer whether you're comparing time between servers using NTP or checking the state of the NTP server itself...
It did show some interesting oddities in holding NTP time on my home network.... I know that I should have 3 or more NTP servers, but it seems that I'm often landing in the state where I only have 2....with lots of delay, resulting in pretty good swings of jitter....almost makes me wonder if this is something I could graph in cacti.... :hmm:
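Since the Debian/Ubuntu-style command definitions weren't there, the two replacements would be sketched roughly like this in commands.cfg (paths and thresholds here are my guesses, not from any packaging):

```
# compare a host's clock against a reference server over NTP
define command{
        command_name    check_ntp_time
        command_line    $USER1$/check_ntp_time -H $HOSTADDRESS$ -w 0.5 -c 1.0
        }

# check the health of the NTP daemon itself (offset, plus jitter warn/crit)
define command{
        command_name    check_ntp_peer
        command_line    $USER1$/check_ntp_peer -H $HOSTADDRESS$ -w 0.5 -c 1.0 -j 10 -k 20
        }
```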

Wonder if I can find a cheap NTP appliance somewhere....

The last stumbling block was check_dhcp, which seems to be broken on FreeBSD. All the discussion on it seemed to point to firewalls, but there are no firewalls here and it still didn't work....tcpdump in both places says it's sending stuff, but no packets appear on the network. Yet I can see the other DHCP traffic on the network.

I remove that check and call it a night. I mull some possible workarounds....the first one I tried was setting up Linux compatibility and running the check_dhcp from my working (ubuntu) nagios. Well, it didn't work...it couldn't find an interface. Oh well, guess there's the ugly way....use nrpe to invoke it. Though that didn't work right away.....probably because while I had created new nrpe configs for all my servers in cfengine, I haven't put any of my ubuntu servers under cfengine yet. Most of the other promises haven't been implemented for ubuntu yet. It was pretty simple to include nrpe.cfg for everything.... in fact it condensed to only 3 files.... a freebsd version, an ubuntu version and a host-specific version for orac. Well, not right away...that happened more recently, while I was going through and updating the nrpe.cfg's by hand on the ubuntu servers. That was when I noticed that some of the files differed only in comments....so I made further simplifications in cfengine...which'll propagate out eventually....

Long term, I'll probably just have to track down some alternate implementation of check_dhcp....
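The nrpe hop, meanwhile, boils down to a one-liner on each side -- something like this (the interface name and plugin paths are illustrative, not copied from my configs):

```
# nrpe.cfg on the ubuntu box that can actually emit DHCP discover packets:
command[check_dhcp]=/usr/lib/nagios/plugins/check_dhcp -i eth0

# and on the FreeBSD nagios server, the service check runs:
#   /usr/local/libexec/nagios/check_nrpe -H orac -c check_dhcp
```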

I then add cbox to monitoring...and then looked to see about monitoring things that are on cbox/dbox...so I found checks for freeradius, cups, and squid, along with improvements to checks on ntp. The check_squid was tricky....I got it working by hand, after making the suggested change for the default Cache type parsing, which turned out to be a squid3 vs. squid2 difference (box is still running squid 2.7 - since I had re-built it by hand with SSL support, and blocked ubuntu from updating it; orac wasn't blocked, so it eventually turned into squid3).

It worked by hand, but wouldn't work under nagios...turned out that the embedded perl wasn't liking it. I was going to disable embedded perl for it, when I took a look at what it was complaining about. And, did some reading on embedded perl.... the gist was "use strict", "perl -w" and "perl -c" as starting points. perl -w was fine, but perl -c had one problem....which I fixed. But, no go. And, then I noticed the line "# todo : use strict"; guess I'll have to deal with that.

And, making that all happy, got it working.

The only other quirk was that the memory check wouldn't work on FreeBSD; I guess there's no mallinfo() available for that. So, no running that test on those servers....plus no Cache test on box. But, it still left enough variety of tests that worked on all. And, it wasn't so much that I wanted to get all the information, but I chose to define all the different tests with ports set into the test....so running the check would also verify that all my squid ports worked. There's actually only two that matter, but I have all my squids configured the same, listening on 5 or 7 ports....depending on whether I have SSL enabled. Though I pretty much only need two now. I'm not doing transparent proxying, and I don't need the SSL now that I've split box into dbox/cbox....the SSL was so ddclient could work on box and update dyndns via proxy to DSL....

Next up is adding zen to nagios, and coming up with more tests of things that are specific to zen, whether or not they were covered in the old nagios.

Though as I worked along...there were things I couldn't find monitors for...though I realized that I could have cfengine promise that those services were running. Plus cfengine was also taking care of other things. So, I should probably work on writing some promises for zen, so I can have promises to make sure things are started up again after a port is updated, or that php/extensions.ini is reordered, etc.
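A promise of that shape might look like the following -- the bundle name, class name, and rc.d path are made up for illustration, not lifted from my actual policy:

```
bundle agent zen_services
{
processes:

  freebsd::

    "nagios"
      comment       => "make sure nagios comes back after a port update",
      restart_class => "restart_nagios";

commands:

  restart_nagios::

    "/usr/local/etc/rc.d/nagios"
      args => "start";
}
```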

But, I'll probably continue adding everything else to nagios first.
