
07/06/13

  09:32:00 am, by The Dreamer   , 659 words  
Categories: Software, Ubuntu, FreeBSD

Ubuntu squid with SSL

Link: http://lawrencechen.net/ddclient-aamp-squid

This is an update to the "ddclient & squid" here

Ran into a new problem recently....though the need for SSL in squid on Ubuntu is on its way out, since I'm slowly replacing this server with a FreeBSD server.

As a result, I don't pay attention to this Ubuntu server as much as I used to, so I've configured unattended-upgrades. It was already installed, but it didn't seem to do anything: on other servers I'd log in to find lots (40+) of patches available, more than half of them security. Then I came across how to configure it to do more than just security patches, including sending me email and, on some systems, automatically rebooting when necessary. (Should've thought to look at how unattended-upgrades is configured to do such things in the Ubuntu AMI I have in AWS.)
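The knobs in question live in /etc/apt/apt.conf.d/. A sketch of the relevant settings (the origin strings follow the hardy-heron-era style that shows up in the log below; exact names, and availability of options like Automatic-Reboot, vary by release):

```
// /etc/apt/apt.conf.d/50unattended-upgrades (sketch)
Unattended-Upgrade::Allowed-Origins {
        "Ubuntu heron-security";
        "Ubuntu heron-updates";    // more than just security patches
};
Unattended-Upgrade::Mail "root";
Unattended-Upgrade::Automatic-Reboot "true";

// /etc/apt/apt.conf.d/20auto-upgrades -- makes it actually run daily
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```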

Since getting unattended-upgrades configured on this old server (32-bit Ubuntu Server, which I've heard now has a 12.04LTS download??? They had said they dropped 32-bit server support, so there was no version after 10.04LTS. So I couldn't upgrade, and now I'm way past EOL, which is causing problems...probably need to hunt down the landscape and ubuntuone services and nuke them, instead of letting them degrade my server for being EOL.) I've also had to update packages on here from outside sources to keep things running, so I guess I should work harder on abandoning this server.... Where it'll likely get reborn as [yet ]a[nother] FreeBSD server....

Along with the server that I think I have all the parts collected for, but just need to sit down and put together. It started as a mostly functional pulled 1U server, in need of... well, either new fans or a new case.... I opted for the new-case route. It also needed drives and memory. But, as a result of the new-case route... aside from case/power supply... it meant I would need to get heatsinks, since the passive ones relied on the 1U case channeling air flow, which would be hard to recreate in the tower case I went with. It's a huge tower case, given that it's an E-ATX motherboard... yet it isn't a full tower (like the formerly-Windows machine called TARDIS... someday I'll work its regeneration....need money to buy all the bits and pieces that'll make that up, which I haven't fully worked out what those will be....or where it'll go, since my dual 23" widescreen FreeBSD desktop has consumed all of the desk that it would've shared....and I'm not really keen on the idea of a KVM for this situation. :hmm: )

Anyways...every day I get an email from unattended-upgrade for this system.... with:

Unattended upgrade returned: True

Packages that are upgraded:
 squid-common 
Packages with upgradable origin but kept back:
 squid squid-cgi 

Package installation log:


Unattended-upgrades log:
Initial blacklisted packages: 
Starting unattended upgrades script
Allowed origins are: ["['Ubuntu', 'heron-security']", "['Ubuntu', 'heron-updates']"]
package 'squid' upgradable but fails to be marked for upgrade (E:Unable to correct problems, you have held broken packages.)
Packages that are upgraded: squid-common
Writing dpkg log to '/var/log/unattended-upgrades/unattended-upgrades-dpkg_2013-07-06_08:05:42.056193.log'
All upgrades installed

This is because of that quirk where, even though I rebuilt my squid with SSL and kept it at the same version, apt wants to install its build to replace mine (of the very same version). Which is why I did the hold thing.

I could take the alternative of appending a string to make my version sort ahead of the current one....though I suppose I still won't unhold, so that unattended-upgrades won't upgrade it should such a thing appear (unlikely, since both the OS and squid are ancient...and there'll be no more updates.) But the intent is to hopefully silence unattended-upgrades in this matter.
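A sketch of why the suffix trick works: a rebuilt package whose version string carries a local suffix sorts *newer* than the archive's build, so unattended-upgrades would stop trying to replace it. The version strings below are illustrative, and the authoritative comparator is `dpkg --compare-versions`; GNU version sort approximates the same ordering for this case:

```shell
# Hypothetical archive version vs. a local rebuild with an "+ssl1" suffix.
archive_ver="2.7.STABLE9-4ubuntu6"
local_ver="${archive_ver}+ssl1"     # hypothetical local rebuild with SSL

# Any non-"~" suffix sorts after the bare version, so the local build wins:
newest=$(printf '%s\n%s\n' "$archive_ver" "$local_ver" | sort -V | tail -n1)
echo "$newest"
```

With the local build sorting newest, the package could even be un-held, since the archive's same-or-older version would never win the comparison.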

Though I'm kind of surprised it's still doing anything....hmmm, guess there was a new security patch for squid 2.7 back on January 29, 2013 that I've been missing (I suppose it had already downloaded the update into its 'cache'....or the backend is still there, just not getting updates beyond what's already in it....whatever, I think I'm down to one more service to move off....)

07/04/13

  05:59:00 pm, by The Dreamer   , 476 words  
Categories: Software, CFEngine

Upgrading from CFEngine 2 to CFEngine 3

I just learned of a key missing detail that would probably have helped lots of other CFEngine 2 sites make the transition to becoming CFEngine 3 sites.

All the sites, including CFEngine's own, have docs about upgrading from CFEngine 2 to 3....

They touch on, or go in-depth into, conversion of policies from 2 to 3, extol how 3 is better than 2, and then offer vague options on how to upgrade (either in-place or replace)....

The most detailed explanation was a slide deck...which still wasn't detailed enough. It says "CF2 and CF3 designed to be interoperable" and "Replace CF2 policies at your pace". How?

In-Place Upgrade

"Replace cfexecd with CFEngine 3's cf-execd" - Access controls remains untouched, runs cf-agent.

"Sample input files contain integration promises" - Launched automatically, Changes crontab

And then it gets into the steps:

  • Install CFEngine 3
  • Copy new inputs files to CF2 master repository
  • Remove any rules to reinstall CF2 or add cfexecd or cfagent to crontabs
  • Remove cfexecd from start up
  • Edit update.cf
  • set email options for executor in promises.cf
  • cf-agent --bootstrap

If all went well, you are now running CFEngine 3.

Bootstrap policy server using:

cf-agent --bootstrap --policy-server

  • Remove all rules and policies that are capable of activating CFEngine 2 components
  • Convert cfservd.conf into a server bundle
  • Place a reference to this in promises.cf
  • Add converted CFEngine 2 policies or create new CFEngine 3 policies
  • Done???? :??:

    Something's missing....where's this interoperability taking place? Does CF3 know how to run CF2 policies? No... where's this "replace CF2 with CF3 at my pace"? It reads like a full in-place replacement of CF2 with CF3....

    So I finally raised this on a list...

    Answer?!

    It's why the CF3 binaries have dashes in their names: so you can drop them into the CF2 working directory.... The trick is editing the exec_command in the executor configuration (that's the command for running the agent); modify it to run both agents (v2 and v3).

    Wow...that's kind of an important detail that's been missing!
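My reading of that answer, as a sketch (this is my guess at the shape of it, not anything from the slides; exec_command is a real attribute of body executor control, and the paths are whatever your installs actually use):

```
body executor control
{
    splaytime    => "1";
    # run the CF3 agent first, then the old CF2 agent, until every
    # CF2 policy has been converted and cfagent can be dropped:
    exec_command => "/var/cfengine/bin/cf-agent -f failsafe.cf && /var/cfengine/bin/cf-agent && /var/cfengine/bin/cfagent -q";
}
```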


      02:58:00 pm, by The Dreamer   , 254 words  
    Categories: Software, FreeBSD, CFEngine

    hindsight on cfengine 3

    In retrospect, maybe what I should've done is switched the origin of my sysutils/cfengine to sysutils/cfengine34 when 3.5.0 came out. Since I see that cfengine-3.4.5 has recently come out, and bug fixes to cfengine-3.4.4 were more of what I was after than new features. Though I am intrigued by what 3.5.0 appears to bring, and am considering making use of it...of course, by the time I get to it 3.5.1 or newer might be out.

    OTOH, do I really want to build cfengine-3.4.5 in the semi-usable package management system we use at work for building and maintaining packages for Solaris 9 and Solaris 10 SPARC, and Solaris 10 x64? The system builds everything 32-bit, though I'm pretty sure we don't have 32-bit hardware anywhere in the datacenter anymore. Though we still have a few Solaris 10 systems around.

    Hmmm....

    % wget 'https://www.cfengine.com/source-code/download?file=cfengine-3.4.5.tar.gz'
    --2013-07-04 08:39:34--  https://www.cfengine.com/source-code/download?file=cfengine-3.4.5.tar.gz
    Resolving www.cfengine.com (www.cfengine.com)... 62.109.39.150
    Connecting to www.cfengine.com (www.cfengine.com)|62.109.39.150|:443... connected.
    OpenSSL: error:14077458:SSL routines:SSL23_GET_SERVER_HELLO:reason(1112)
    Unable to establish SSL connection.
    

    :hmm:

    Seems to be a problem with a client using openssl 0.9.8 talking to a webserver using 1.0.0?
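A quick way to pin down the client side (a sketch; the s_client probe is commented out since it needs network access, and the hostname is just the one from the failed wget above):

```shell
# Confirm which OpenSSL the client side is actually using, since the
# handshake error points at the library rather than wget itself.
openssl version          # a 0.9.8 line here would explain the failed hello

# Probing the handshake directly reproduces the failure outside of wget:
# openssl s_client -connect www.cfengine.com:443 < /dev/null
```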

    Guess there's a patch submitted against 0.9.8y.... http://www.mail-archive.com/openssl-dev@openssl.org/msg32486.html

    But this will be a big mess at work....nothing is using 0.9.8y yet (though I've been meaning to build it so I'll be ready when there's a bind-9.9.3-P2...I had started building 9.9.3 when there was a security advisory for a problem introduced in that version...so I'm waiting for the next 'real' security patch to do the upgrade...though maybe I shouldn't wait, since the intent is for this to be the first 64-bit build....)

    Not sure what I'm going to do about cfengine3 at work though....

    06/29/13

      05:53:00 pm, by The Dreamer   , 1493 words  
    Categories: Hardware, BOINC, Operating Systems

    Another Radioactive@Home sensor in Manhattan, KS

    So, there's this BOINC project out of Poland called Radioactive@Home, where you have a radiation detector hooked up to a computer taking samples, etc. It's my second BOINC project with a hardware sensor, though I had signed up for this one first...back on June 16, 2011. QuakeCatcherNetwork had come later, but getting a sensor was quick (though there were delays in getting it working; they had switched to a new sensor where they didn't have Linux drivers yet...etc., etc.) But doing Radioactive@Home took longer, as sensors are built in batches; there had been early batches that I missed, and I wasn't all that sure at first if I really wanted to go through the hassle of getting one.

    But then another user announced that he would do a group purchase of 50 or so, which should cut shipping costs quite a bit by having a cheaper large shipment from Poland, plus domestic delivery for the last leg. The way delivery costs go, you can get up to 3 for the delivery charge...though most people only want one....at least initially.

    Basically I ordered my first detector around August 2011, and finally received it in March 2012. And it just runs...though occasionally I'll look to see if anything interesting was recorded (like the interesting trace from around the end of the world....)

    Meanwhile, on June 26, 2012 there was an announcement of a new detector...a pretty looking one. My first sensor came in a prototype-style case with rough cutouts, etc. Not really bad looking, but still plain and crude. The newly announced sensor looked neat, the kind of thing I might consider putting on my desk at work....

    So, there was basically an announcement that there wasn't going to be another bulk US purchase...so after some thought, I decided this new detector was just too pretty to pass up, and I ordered one mid-to-late July 2012. Got confirmation on July 23rd, 2012: 27 Euros for the detector, plus 10 Euros shipping for up to 3 detectors (more than 3: pay for the detectors now, get billed for the actual shipping cost later). Plus, if using PayPal, specify that I'll pay the transaction fees....

    In the previous order, it had been requested that we have PayPal funds to pay for the transaction....or use a check. I had tried to keep a float of cash in my PayPal account....but when it finally came time to pay, there wasn't quite enough to do that, so I opted to just mail a check. For this second order, I went with PayPal and had PayPal add the transaction charges to my total.

    First detector cost me $46.25 by personal check. Second detector cost me $47.36 (after currency conversion and including the transfer charge).... I sent the PayPal money on August 21, 2012.

    And, then it was wait and wait and wait. I would check the boards now and then for updates...but it was mostly other people wondering the same thing.

    Eventually, I stopped checking in...and kind of forgot all about the sensor. Though I did visit the site briefly, without lingering or reading the detector threads...I went to check what platforms the project supported. Because when I had originally ordered, I was down to a Solaris 10/x64 workstation, a Windows box, a first-gen MacBookPro (32-bit Core Duo), and a dead Linux machine. Eventually I got a computer to replace the dead Linux box...but I went with FreeBSD instead, and it eventually displaced the Solaris workstation. In February 2013, while I was working late on my FreeBSD system, I saw the Windows box update itself and reboot, and then fail to boot. It had killed itself....pretty much the same way my home Windows box had killed itself in an auto-update in February 2012. I left it off, not sure what I would do with it....I thought about OmniOS or SmartOS...though it was a first-gen i7, so no EPT for KVM. Eventually, I decided to install Ubuntu 12.04LTS on it....where it's mainly backup for when my FreeBSD system crashes.... It's one thing that new Seagate drives only have 1-year warranties...it's another thing that they seem to have trouble lasting even that long.....

    And then an iMac 27" appeared on my desk....back when it seemed bleak on getting FreeBSD working as my main workstation, I was talked into getting one. But FreeBSD remains my main workstation....though there are some things for which the iMac is the only computer I have where they work (like being able to participate in WebEx, Lync, Google Hangouts or Xoom for web conferencing....plus it finally solves having mail stay open while I switch to the appropriate desktop to do whatever....I'm up to 17 desktops now....where there are typically 4 to 12 windows each, either of uniform size or variable size, and on some desktops the windows overlap, though that desktop is mainly for tailing logs.... I'm up to 2 full desktops and 2 half desktops for that....)

    Anyways, I had made a quick visit to the site...because I wondered if Mac OS X was a supported platform (it wasn't), or if anybody was using FreeBSD for this project...didn't get any search hits. And it seemed unlikely that the hardware part would work through the Linux emulation on FreeBSD (especially the Fedora 10 base, and I'm not sure what the process for converting to the CentOS 6 base is that wouldn't break all the things I'm using Linux emulation for....though it is mostly other BOINC projects.) Though doing the search now, I see that a couple days ago the question got raised....with not much luck on having it find the detector ... but ending with a link to a FreeBSD version of the application.... Though since I have a Linux system at my desk (whose primary purpose is to run VBoxHeadless containing Windows 7, for those occasions where I need to use vSphere Center...and passing the time doing BOINC)...I'll just go with running the new detector, should it ever arrive, on that.


    06/24/13

      08:57:00 am, by The Dreamer   , 838 words  
    Categories: Home Theatre, Software, Momitsu V880N, FreeBSD, CFEngine

    This just in, cfengine developers don't test or use cfengine!?

    So, this morning I was wondering why my nagios was still warning about something that it shouldn't be. I was positive I had changed the warning threshold above where it was. I do an 'svn status' on my work dir: nothing uncommitted. I do an 'svn up' on the cfengine server: no updates. I drill down to the file and it's correct. (Perhaps I need an alias on this side as well...though I usually only use 'cdn' where my svn work dir is, or on the nagios server....it's because at work this alias is used in association with nagios as well. Work nagios is not yet managed by cfengine, but I was considering it for the new nagios server that I'm trying to set up between fires and stuff at work....except the fact that we're still running cfengine2 is really starting to become a problem......though I wonder if cfengine2 could do it, if it weren't hampered by how the former admin had implemented things. The work cfengine made a mess when used to set up a new system, because of weird cross-interactions between 'promises', and because the promises weren't written in the same sequence they run; things that probably weren't a problem when cfengine was originally deployed to promise that nothing ever changes....)

    Anyways....I finally hunt through the -v output... which is now not much different from debug noise, and nothing like what verbose used to be in 3.4.4.....no more searching for 'E nagios' to find where the start of "BUNDLE nagios" is in the output, and then finding the specific file promise..... What a mess. It's like they don't want you to know what's going wrong....

    Turns out I missed some more uses of 'recurse' from cfengine_stdlib.cf, where xdev=true is busted.

    It was one of three bugs that I had logged against cfengine 3...#2983. It was almost immediately flagged as a duplicate of #2965 ("3.5.0rc fails to recursively copy files with strange message")...and this morning at 5:03am, my bug was closed, as it indeed seems to be fixed for 3.5.1 (soon...).

    Wonder what the definition of soon is....I had a previous problem where cfengine was complaining about a bad regex, when the default for insert_lines: is that they are 'literal' strings. Which was making it hard to use cfengine 3.4.x to make edits to my crontab files. After putting up with it for a couple of months, I finally visited the bug tracker and found that it had already been reported and fixed for the next version. But months and months went by and no new version appeared. Though it does seem to be fixed in 3.5.0.

    Anyways, reading #2965 was interesting.... aside from where the dev? spots another bug in the same code and has that pulled as part of the fix. Also, it was reported against the RC and still made it into the release. Though I once reported a bug against an ubuntu 12.04 beta release....and it persisted into the release version, where they debated fixing it, because apparently LTS means don't update anything after its release...(though I thought they had said things like firefox would stay current now, instead of staying fixed at the version at time of release...) Plus it seemed I had to keep reminding them that my bug was reported before release, so that should be reason enough to release the fix. I'm pretty sure they did, but I hardly use that ubuntu desktop anymore (or any ubuntu desktop....though I did fire up my laptop yesterday, but that's because there was a new VirtualBox and I hadn't updated the XP VM on there in quite some time....though I've been thinking about whether a FreeBSD laptop is feasible.)

    Someone asked that they add a unit test for this bug. The response: a unit test would need a running server, which they don't have (yet)...how long has cfengine been around for them to not be using it? Sure wouldn't want to be somebody who's paying for this.

    So does that mean nothing is being tested, and that nobody involved in development uses cfengine? Because this was the kind of bug that pretty much anybody who uses cfengine3 would run into. Considering I only have the 3 systems (zen - policy server, cbox, dbox) at the moment....

    Perhaps I'm jaded by having worked for an Enterprise software company, where we did full builds every week, with full runs of automated and manual QA testing. And where we had to create unit tests for anything beyond trivial bugs as part of the fix/review-before-closure process. Though from what I'm hearing about Chef...it's worse....

    Still haven't decided what I'm going to do with my Linux systems....migrating the files from Orac, if I were to turn it into FreeBSD, is the stumbling block; plus I would lose certain services, some of which might not really be an issue, since it's probably time I make the leap to blu-ray. And either I get another Roku, or figure out how to incorporate the smart side of my TV into my life (probably time to finally upgrade my receiver....purchased October 27th, 1999)....

    06/21/13

      10:18:00 am, by The Dreamer   , 181 words  
    Categories: Software

    Upgraded tardisi.com to zenphoto-1.4.4.8

    What a pain that upgrade was....it kept changing files and directories to really weird permissions. Like setting perms on index.php to 0, or the themes directory and the themes/default directory to 311 (d-wx--x--x), and changing the parameter for CHMOD_VALUE to an undefined $conf['CHMOD']...from its previous value of 0755. There were other places where it had changed the perms to 0, etc. It was also complaining that it wanted log files to be 0600, but they were already set to 0600...except for debug.log, which it had changed to 0.

    Eventually I cleaned up the mess, and as soon as I got the setup to show the go button, I clicked....ignoring the other warnings, such as the mysql at my hosting provider meeting minimum requirements but not preferred, or it thinking that files already set to strict perms were wrong and that I should set them to strict (which was part of why things were being changed to no perms at all....)
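The cleanup amounted to resetting the tree to sane perms and re-tightening the logs. A sketch, assuming a typical zenphoto layout; WEBROOT and the stand-in tree below are illustrative (created here so the commands are runnable as-is, point it at the real directory instead):

```shell
# Illustrative web root and stand-in files (replace with the real tree):
WEBROOT=${WEBROOT:-./zenphoto}
mkdir -p "$WEBROOT/themes/default" "$WEBROOT/zp-data"
: > "$WEBROOT/index.php"
: > "$WEBROOT/zp-data/debug.log"

find "$WEBROOT" -type d -exec chmod 755 {} +    # directories: rwxr-xr-x
find "$WEBROOT" -type f -exec chmod 644 {} +    # files: rw-r--r--
chmod 600 "$WEBROOT"/zp-data/*.log              # the logs it wants private
```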

    Things seem to be working again.

    I think I should get back to hunting down photos and filling in the empty albums, and see if other photo sets should go up....

    06/15/13

      04:04:00 pm, by The Dreamer   , 1123 words  
    Categories: Software, FreeBSD, CFEngine

    cbox/dbox cfengine update also full of fail

    First I had saved libpromises.so.1, so that I could invoke cf-agent from /var/cfengine/bin to pull in the new cfengine-3.5.0 binaries and pull up the new inputs from my policy server.

    Except I had forgotten to commit the 'bundle agent foo' kluge, and I had done an 'svn revert ...' to undo all the fiddling I had been doing on the policy server.

    But, after I make the change, cbox/dbox refuse to copy up the new 'do-crontab.cf' file. I try running things verbose, 'cf-agent -v > out', but there's no out file??? :??: Did I slip? Am I losing my mind?

    Guess I should do it in another directory, because the update does purge...so it's removing my 'out' file. :oops:

    So, its saying this:

    2013-06-15T14:40:55-0500  verbose: Entering directory '/var/cfengine/inputs'
    2013-06-15T14:40:55-0500  verbose: Device change from 1242801830 to 843968349
    2013-06-15T14:40:55-0500  verbose: Skipping '/var/cfengine/inputs/do-mysql.cf' on different device
    2013-06-15T14:40:55-0500  verbose: Device change from 1242801830 to 843968349
    2013-06-15T14:40:55-0500  verbose: Skipping '/var/cfengine/inputs/do-ddclient.cf' on different device
    

    What are those device numbers? Is it saying that it won't copy the file because it's remote? After some looking around....I see that 'u_recurse("inf")' is:

    body depth_search u_recurse(d) {
            depth   => "$(d)";
            xdev    => "true";
    }

    Which seems perfectly reasonable: I want it to recurse the destination directory and not cross devices. But apparently, it now means don't recurse into devices different from the source directory's device???

    I looked around to see whether the 'xdev' line was mine or came from where I based my initial setup on....and found it's what was given over on Unix Heaven - http://www.unix-heaven.org/node/53#cf3-update
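So the minimal change, as far as I can tell, is just dropping (or disabling) the attribute; a sketch of the adjusted body:

```
body depth_search u_recurse(d)
{
        depth => "$(d)";
        # xdev omitted: in 3.5.0 it is checked against the *source*
        # directory's device, so remote copies get skipped with
        # "Skipping ... on different device"
}
```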

    That change in behavior totally doesn't make any sense. But, enough hair pulling.... let's get things running again on cbox & dbox.

    Hmmm, now its spewing warning messages:

    2013-06-15T14:47:31-0500  warning: Setting replace-occurrences policy to 'first' is not convergent
    2013-06-15T14:47:32-0500  warning: Setting replace-occurrences policy to 'first' is not convergent
    2013-06-15T14:47:32-0500  warning: Setting replace-occurrences policy to 'first' is not convergent
    2013-06-15T14:47:33-0500  warning: Setting replace-occurrences policy to 'first' is not convergent
    2013-06-15T14:47:33-0500  warning: Setting replace-occurrences policy to 'first' is not convergent
    2013-06-15T14:47:35-0500  warning: Setting replace-occurrences policy to 'first' is not convergent
    

    I didn't get these before...and there's only one occurrence, so it is convergent. My use of 'replace_with => With("...")' for 'replace_patterns:' was lifted from -- https://cfengine.com/archive/manuals/cf3-solutions. So, in With(), I should change 'occurrences' from "first" to "all"....why have this attribute if its purpose now is just to annoy?

    It's already annoying enough that -q is gone....but -K remains? If I'm running cf-agent manually to speed up picking up a change....I'd like to not have to wait out splaytime, but I don't want to ignore locks...because if I'm close to when cf-execd wants to run....it results in a big mess, especially for promises using single-copy nirvana.... cf-agent doesn't distinguish why the specific copy failed when there's a collision in this case, so a less specific version gets copied....and then it has to fix it in the next run.

    This causes problems for things like DNS and NTP. The specific config is that cbox is an NTP server polling external servers in freebsd.pool.ntp.org....and the generic NTP config is to use cbox and dbox as NTP servers....which has resulted in wild client oscillations. Though both my dsl and cable connections have been rather unstable lately...dsl will just have periodic drops, while cable goes out until I reboot my cable modem....and the fact that using two NTP servers is probably the worst combination doesn't help. But I don't think I could get away with 4+ NTP servers on my home network. DNS....well, the specific is that cbox is primary authoritative and dbox is secondary authoritative, while the generic is a recursive caching resolver....similar to how I did DNS servers under cfengine at work.....

    4+ justification - http://www.ntp.org/ntpfaq/NTP-s-algo-real.htm#Q-NTP-ALGO

    NTP likes to estimate the errors of all clocks. Therefore all NTP servers return the time together with an estimate of the current error. When using multiple time servers, NTP also wants these servers to agree on some time, meaning there must be one error interval where the correct time must be.

    Number of servers:

    1. With one server, it's always trusted, even when it's totally wrong.
    2. With two, if they differ, how do you break the tie?
    3. With three, one server failing drops you back to the situation above.

    So, 4+ servers for reliable accuracy.

    Whether the total should be odd or some other sequence, is another debate.

    http://www.ntp.org/ntpfaq/NTP-s-config-adv.htm#Q-SERVER-NUMBER

    But three is a good place to start, and you can progress to three-groups-of-three if you feel the need.

    three-groups-of-four? four-groups-of-four?
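For concreteness, the 4-server shape being debated is nothing more than this (a sketch; the pool hostnames follow the freebsd.pool.ntp.org convention mentioned above):

```
# /etc/ntp.conf (sketch): four upstream sources, so one falseticker
# can still be outvoted by the other three
server 0.freebsd.pool.ntp.org iburst
server 1.freebsd.pool.ntp.org iburst
server 2.freebsd.pool.ntp.org iburst
server 3.freebsd.pool.ntp.org iburst
```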

    Hmmm, I don't know why I had never done 'cf-agent -v > out' in /var/cfengine/inputs on cbox/dbox before....though I often do it on my policy host. I guess I don't normally need to be in /var/cfengine/inputs on cbox/dbox when I'm troubleshooting cfengine.

    Hmmm, so in the verbose run, I only see two places with warnings??? That seems odd.

    cf3>     .........................................................
    cf3>     Promise's handle:
    cf3>     Promise made by: "name="tor""
    cf3>     .........................................................
    cf3>
    cf3>  -> Looking at pattern name="tor"
    cf3> WARNING! Setting replace-occurrences policy to "first" is not convergent
    cf3>  -> Verifying replacement of "name="tor"" with "name="tor2"" (2)
    cf3>  -> Replace first occurrence only (warning, this is not a convergent policy)
    cf3>  -> Replaced pattern "name="tor"" in /usr/local/etc/rc.d/tor2
    cf3>  -> << (2)"name="tor2""
    cf3>  -> >> (2)"name="tor2""
    cf3>  -> Replace first occurrence only (warning, this is not a convergent policy)
    

    Wait....it makes even less sense.... this edit promise is about creating /usr/local/etc/rc.d/tor2 from /usr/local/etc/rc.d/tor. Since the starting file is fixed, the first occurrence is inherently fixed...so replacing it is convergent. So the warning is just there to annoy.

    And wait....it's also having issues using recurse() from cfengine_stdlib.cf....well, that has xdev => "true"; set as well. And that body is unchanged from the previous version.

    Oh, I suppose I could remove splaytime.... it's not like I have a huge number of systems....

    Wait... the tor2 edit_lines also has an if_elapsed("60"), so the warnings are from other edits that were probably skipped. Guess I'll have to risk a -K to find all the occurrences of the warning.

    Though it's strange that I'm only seeing the messages on dbox, even though the promise exists for both cbox & dbox....

    OTOH, on cbox it did catch an error.... where I had promised that "/usr/local/www/apache22/data/tivo/.", etc., be accessible to the webserver, except I had typed "/usr/local/www/apache2/data/tivo/.", etc. There was no error/warning about skipping this promise before. Though in 3.5.0, it gives an error that it couldn't chdir to this directory...which delayed me in noticing the actual reason for the error.

    And, once again...another wasted Saturday.... I didn't really need to go to the mall and spend money I don't have, I suppose. :hmm:

    Now, before I can go back to just doing 'portmaster -a' to keep current....I still need to decide the fate of the databases/mysql55-server update.... I suppose while cfengine was down, I could've done the upgrade, if the decision was to stay put. But I'm still wondering if I want to try percona, or perhaps better yet, maria! I wonder what my friend Maria is up to these days....

      01:15:00 pm, by The Dreamer   , 694 words  
    Categories: Software, FreeBSD, CFEngine

    Meanwhile upgrading cfengine-3.4.4 to cfengine-3.5.0 not going well.

    Upgrading the port was no problem....but it broke my cfengine. Why? The port puts the cfengine binaries in /usr/local/sbin, while the cfengine practice is to keep a private copy in /var/cfengine/bin. Which would be fine if the binaries didn't have shared library dependencies. Which they do: specifically libpromises.so.1, which is gone in cfengine-3.5.0...there's a libpromises.so.3 now.
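This kind of breakage shows up immediately with ldd. A sketch of the check (the cfengine path is the one from this post; the function itself works on any dynamic binary, demonstrated here on /bin/sh):

```shell
# Flag binaries whose shared-library deps no longer resolve.
check_libs() {
    if ldd "$1" 2>/dev/null | grep -q 'not found'; then
        echo "$1: missing shared libraries"
        return 1
    fi
    echo "$1: all libraries resolved"
}

check_libs /bin/sh
# check_libs /var/cfengine/bin/cf-agent   # would have flagged libpromises.so.1
```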

    Though before I discovered this problem, I first wanted to make some tweaks to update.cf so that I would have some indication that it had copied new binaries from /usr/local/sbin up to /var/cfengine/bin, since I had noticed files there newer than expected. Though I had probably just rebuilt the same version of the port because a dependency had updated and /usr/ports/UPDATING indicated that I needed to do so.

    This is probably why, at work, the person that set up our cfengine 2 went to extreme effort to create static cfengine executables, ignoring that such things are officially not supported on Solaris. Though we seemed to get away with running those executables, built on a Sol10u3 sun4u system, on systems current up to Sol10u11, plus a few Sol11 systems and systems of the sun4v architecture.

    In a past life...we had run into a statically built executable (the installer) not working on our first UltraSPARC-III system (Sun v280r)...trying to recall what our build machine was back then.... my recollection says we only had the SPARCserver 20 and SPARCstation 10 before that. As I recall, we had to wait for a patch from Sun, as well as rebuild the executable shared on the SPARCserver 20, to get it to work. It wasn't long after that, though, that we retired support for sun4m, changing minimum requirements. Wonder if the application has become 64-bit yet? But for the ABI backwards compatibility claim to work, the executable needs to be built shared...so that it'll find the libraries provided on newer systems, allowing older executables to still work.....

    portmaster probably didn't know that it should save /usr/local/libexec/cfengine/libpromises.so.1; though would the old executables know how to find the library once it's moved aside? (I do have SAVE_SHARED=wopt uncommented in my portmaster.rc file.)

    Occurs to me that I could just restore the file from backup; that would allow me to run failsafe.cf and get back to where everything should work again.

    Though before I did that, I had invoked cf-promises (the one in my path -- /usr/local/sbin), and it complains about library.cf. Guess it doesn't like the old cfengine_stdlib.cf; the new one isn't where the old one was...it's here instead --> /usr/local/share/cfengine/CoreBase/libraries/cfengine_stdlib.cf. I took a quick look at what's in it...mainly to make sure that the bundles/bodies I use are still there...and noticed some interesting new ones, such as a package_method of freebsd_portmaster. Someday I should look at having cfengine 3 do port/package promising....
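    Port promising with that new body might look something like this -- a speculative sketch, since I haven't tried it; the bundle name and port origin are just illustrations, and freebsd_portmaster is the package_method I spotted in the new stdlib:

```cfengine3
bundle agent want_ports
{
packages:
  freebsd::
    # Promise the port is installed, letting portmaster do the work
    "net/rsync"
      package_policy => "add",
      package_method => freebsd_portmaster;
}
```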

    But first, get cfengine working on policyhost; hopefully the other servers (at 3.4.4) are still working.....guess not: 3.4.4 doesn't like the 3.5.0 cfengine_stdlib.cf file. And cf-promises is also not happy with some of my other promises....

    Guess I'll update those while I get policyhost working again.

    .
    .
    .

    Or perhaps I need to revert....

    root@zen:/var/cfengine/inputs 317# cf-agent
    2013-06-15T13:22:53-0500    error: Bundle 'crontab' listed in the bundlesequence is not a defined bundle
    2013-06-15T13:22:53-0500    error: Fatal CFEngine error: Errors in promise bundles
    1.755u 0.113s 0:01.94 95.8%     172+2501k 133+12io 1pf+0w
    root@zen:/var/cfengine/inputs 318# 
    # cf-agent -v
    ...
    2013-06-15T14:00:57-0500  verbose: Parsing file '/var/cfengine/inputs/do-crontab.cf'
    ...

    It's there, why's it not working.... 'cf-agent -d' doesn't work, but it will only do failsafe....
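    For reference, the relationship that has to hold -- sketched from memory, with only 'crontab' and do-crontab.cf taken from the actual setup -- is that every name in the bundlesequence must exactly match a bundle defined in one of the inputs files:

```cfengine3
body common control
{
  inputs         => { "cfengine_stdlib.cf", "do-crontab.cf" };
  bundlesequence => { "crontab" };  # must exactly match a "bundle agent"
                                    # defined somewhere in the inputs
}
```

    A file being parsed (as the -v output shows for do-crontab.cf) isn't enough; the bundle inside it has to carry the exact name the bundlesequence asks for.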


      09:12:00 am, by The Dreamer   , 390 words  
    Categories: Software, FreeBSD

    Perl update continued

    For some reason I had cd'd into /usr/local/lib/perl5 on dbox and noticed that 5.16.2 was still present...well, 5.12.4 was still on zen after the upgrade to 5.14.2, and it just had whatis files. But when I went and looked inside this one, I found more than just whatis files.

    Using 'pkg_info -W', I found that I had other ports that had installed perl modules, but whose names didn't start with 'p5-' and which didn't depend on 'libperl.so'.
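    The check can be wrapped up like this -- a sketch using pkg_info -W from the classic FreeBSD pkg_* tools; the function name is mine:

```shell
#!/bin/sh
# Report the owning port for every file under an old perl version's tree.
# pkg_info -W prints which package installed a given file; files no port
# claims are strays from removed (or already-rebuilt) ports.
list_owners() {
    find "$1" -type f 2>/dev/null | while read -r f; do
        pkg_info -W "$f" 2>/dev/null || echo "$f: no owning port (stray file)"
    done
}
# Usage: list_owners /usr/local/lib/perl5/5.16.2
```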

    So, off to rebuild those ports.

    On dbox/cbox it was just databases/rrdtool and print/pdflib...plus some stray files left by already-updated or removed ports. But on zen there was a much bigger list of ports:

    security/clusterssh
    graphics/ImageMagick
    japanese/p5-Jcode (which was missed, because the package name is ja-p5-Jcode-)
    devel/perltidy
    mail/razor-agents
    security/clamtk
    print/foomatic-db-engine
    graphics/gscan2pdf
    x11-clocks/intclock
    databases/rrdtool
    print/pdflib

    Hmmm, I probably need to update my i386 space too, which is going to be wrong now...because it uses the make.conf named 'global', and I haven't updated it in a long time.... Not since May 4th. emulators/wine-devel has been updated since then, so I guess I'll have to tackle it sooner rather than later.... Especially since I'm thinking of making another attempt to see if I can get other apps running in wine rather than VirtualBox....


    06/14/13

      09:12:00 am, by The Dreamer   , 1412 words  
    Categories: Healthcare, Quicken/TurboTax

    Screwed by Quicken

    A while back I was having trouble updating transactions in my TIAA-CREF account. I used to update it by entering each transaction by hand every so often, but then a few years ago (when I rolled over a Rollover IRA, in which I had parked the 401k from my previous employer, since the invisible-money fee was huge....share counts would drop every month, with no explanation in the account statements) I let them pick the funds I should spread my retirement across...which makes it much harder to be entering transactions by hand.

    I had tried the Quicken download option, but the dates of the transactions didn't line up with my pay days or the website's transaction history. So, making the download match what I was entering was tedious, as was not entering any by hand and adjusting after download. Also, the download likes to splatter my register with placeholders, and then complain that the placeholders are missing information so it can't do gain calculations.

    So, originally, I'd only invest in 6 +/- 1 funds. Basically a fund for each slice, at some multiple of 5%, rather than the specific percentages an investment tool had suggested for me.

    Now, I have my retirement funds spread over 12 funds in my Mandatory Plan, and 19 funds in my Voluntary Plan (since the Voluntary Plan has access to more choices than those specified by the pension administrator). The Mandatory Plan is funded by the mandatory 5.5% that comes out of every paycheck, plus an 8.5% match from my employer. The Voluntary Plan is money that came from other sources, which could be additional deductions from pay; but in my case it represents what was in my previous 401k.

    So, I just let the Quicken download be as it is....deleting most of the placeholder transactions, because the only transaction that doesn't appear anywhere is the share-count growth of my TIAA-CREF Traditional Annuity. Those I just change into reinvestment actions with a price of $1. Not sure how I would get Quicken to tell me what the gain/loss % is from that....

    Somebody had described how to do the math to include re-investments in the overall gain; perhaps I'll have to look into that someday.
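    One simple version of that math -- a sketch with hypothetical numbers, and the function name is mine: treat reinvested distributions as part of the gain, so only out-of-pocket contributions count as cost basis.

```shell
#!/bin/sh
# Gain including reinvested distributions: reinvestments aren't new money,
# so cost basis is just what came out of pocket, and everything above
# that is gain.
gain_pct() {   # usage: gain_pct <current_value> <total_contributions>
    LC_ALL=C awk -v end="$1" -v cost="$2" \
        'BEGIN { printf "%.2f\n", (end - cost) / cost * 100 }'
}

gain_pct 11000 10000   # a $10,000-contribution account now worth $11,000
```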

    Anyways, I noticed that I somehow hadn't done a download in almost 2 months (the download ranges are 30, 60, 90 days or All, so I try to do it about every 30 days). Quicken doesn't seem too bright about knowing that transactions overlapping the previous download aren't new, and it'll refuse to let me manually match them with the correct transaction, since the transaction had already been matched (or created) by a previous download. So, I have to delete some of the new transactions, along with all the placeholder entries...before accepting the rest.

    Normally this works great....even if the dates land a day or two before payday. It makes the transfer from the cash account associated with my Mandatory Plan...sure, the cash account might go negative...but it all zeros out in the end, usually.

    But, then last fall, there was a weird extra $3 and change in my cash account. I kept looking for a missing transaction, but didn't see one. Eventually, I found that when it had done my annual birthday re-balancing...where it sells parts of some funds, with Quicken transferring the proceeds into my cash account, and then buys amounts of the other funds, with Quicken transferring out of my cash account...it hadn't done that transfer when it added to my Wells Fargo Advantage Growth Fund Institutional. I fixed it by hand, somehow, and continued on my way.

