Keep seeing this annoying message on FreeBSD, even though back on December 20th, 2013....I had set
"security.bsd.unprivileged_mlock=1" in /etc/sysctl.conf to try to finally address this problem.
The default RLIMIT_MEMLOCK resource limit is 64k, which I would think is more than sufficient.
So, it was time to research this problem in more depth.
Found that there's a DEBUG_SECURE_MEMORY define to see how much memory it's trying to allocate. It allocates in 16k blocks, which it later refers to as pages. 16k pages I seem to recall being Windows; Solaris is 8k and most other systems are 4k (on my FreeBSD system, it's 4k). Well, it's only trying (and failing) to mlock 16k. So, I tried overriding the constant to 4k. But, that also failed.
I had skimmed the man page, where it says:
Since physical memory is a potentially scarce resource, processes are limited in how much they can lock down. A single process can mlock() the minimum of a system-wide ``wired pages'' limit vm.max_wired and the per-process RLIMIT_MEMLOCK resource limit.
If security.bsd.unprivileged_mlock is set to 0 these calls are only available to the super-user.
Well, on my system vm.max_wired defaults to 1323555 and RLIMIT_MEMLOCK (ulimit -l) is 64.....so limit is 64k, right?
Wrong...delving into the kernel source...I found that it first checks that the requested amount plus the amount the process already has locked doesn't exceed RLIMIT_MEMLOCK, and then that the requested amount plus the amount wired system-wide ("vm.stats.vm.v_wire_count") is not greater than vm.max_wired. Well, when I looked at vm.stats.vm.v_wire_count it was 2020311....the system was already wired well past vm.max_wired.
I feel a PR coming on....
1323555 (which, at 4k pages, is about 5GB) is said to be 1/3 of some maximum. I have a 16GB system, probably not all usable...and there's probably some amount reserved....but 2020311 pages is about 7.7GB.
I did a "sysctl vm.max_wired=2097152", and it took it (so I put that into /etc/sysctl.conf, too), and now gnome-keyring-daemon can start without that message.
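Pulling the two kernel checks out of the prose: a simplified Python sketch of the admission logic (not the actual kernel code; the helper name and byte/page handling are my own, with the numbers from above):

```python
# Simplified sketch of the two checks FreeBSD's mlock() path performs.
# Not the real kernel code; values below are the ones from this post.
PAGE_SIZE = 4096

def mlock_allowed(request_bytes, proc_locked_bytes, rlimit_memlock_bytes,
                  v_wire_count_pages, max_wired_pages):
    """Return True if an mlock() of request_bytes would be admitted."""
    # Check 1: per-process RLIMIT_MEMLOCK (bytes)
    if request_bytes + proc_locked_bytes > rlimit_memlock_bytes:
        return False
    # Check 2: system-wide wired-page limit (this is the one that failed)
    request_pages = (request_bytes + PAGE_SIZE - 1) // PAGE_SIZE
    if request_pages + v_wire_count_pages > max_wired_pages:
        return False
    return True

# gnome-keyring trying to mlock 16k, with the observed numbers:
before = mlock_allowed(16384, 0, 64 * 1024, 2020311, 1323555)  # False
after  = mlock_allowed(16384, 0, 64 * 1024, 2020311, 2097152)  # True
print(before, after)
```

With v_wire_count already at 2020311, even a 4-page request blows past the old vm.max_wired of 1323555, which is why overriding the 16k constant to 4k changed nothing.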
So, near the end of July, I started investigating (once again) replacing my HP Photosmart 8450xi, which was now over 8 years old....bought it on June 30th, 2005 - Back from Vacation Tech Buying Spree?...set up on July 9th, 2005 - link.
I had started looking some time before this, but was put off for a bit due to my experiences with the Brother DCP-7065DN -- link, since it seemed most of the choices out there were GDI and I'm moving more and more heavily to FreeBSD as my primary operating system.
Especially since it appears that 'box' finally called it quits on December 2nd, before I had started my journey home from Chicago TARDIS that day....and orac is inching close to its end, as the pair of ST2000DL003's, which evidently only had 1 year warranties from June/September 2012, started going shortly into the new year. I was trying to use ddrescue to force sector remapping on the first drive when the other drive decided to vanish permanently. I had thought it was DM's that had 1 year and DL's that had 5 years; perhaps I had it backwards....or it's a question of when I purchased them, or how they were packaged.
Checking my order history, I purchased one drive in June as a bare drive and the other in September as a retail kit. I haven't yet pulled the drives, so I can't look up the serial number for the vanished one, but Seagate's website says the one that is responding is out of warranty. Even if the other drive is still under warranty, I'm not sure I want to deal with getting it exchanged for a refurb to create a solo 2TB drive. Can't imagine not wanting RAID given what I'll likely use it for. And, I'm not sure I'd buy a different 2TB drive to be its mate (and it won't work with my other 2TB arrays, since it's an advanced format 2TB drive...while the lraidz2 pool on zen used legacy format 2TB drives, which limits options for growing it non-destructively.)
Fortunately, I had copied one of the big volumes from it over to zen (along the way it got corrupted, so I had been trying to copy it back from zen when the other drive died). And, files of the other volume (my pyTiVo store) should all be in backup, though I don't have space on zen to restore them yet.... I have pyTiVo on zen, but the content under it is different...and larger, so much so that it is currently not being backed up. I haven't made much progress on building the second backup server....guess I'll need to look at this sooner rather than later.
And, now it seems the other 2TB RAID-1 array on orac is dying. I just went ahead and failed the drive that was giving it issues. Not sure what to do with it...suppose I could try ddrescue on it and see what happens. The big volume on it had also been copied over to zen, so I guess I'll update my HSTi's to point to zen instead of orac for their content. Another volume used to be for Time Machine backups, but I had moved that over to zen when I set up the new work laptop to do Time Machine backups on my home network; I was using that space as overflow from pyTiVo. And, another was for backups of various things, which I had stopped adding to as new backups are going to zen now. It's things like regular backups of my websites at dreamhost and 1and1, my router configs, serial console servers, and some other backups. I was also replicating some directories on zen to orac as backup (left over from when zen was a Windows 7 PC....which saved me from losing everything when it scrambled itself.)
But, back to my printer quest.
Today this message appeared, and I knew that I needed to find a socket with a QLIM smaller than QLEN=8, but couldn't remember what the formula was.
But, the topic had come up on the bind-users list back on November 14th, 2013, where the message was about '16 already in queue'.
For months before this I had been getting messages for '10 already in queue', and the only tcp socket I found that might be a problem (the only thing with a QLIM of 10) was the submission port on sendmail, which didn't make sense...and bumping it up didn't help.
And, searching my system for the pcb was a bust (using lsof -i -Tfs | grep LISTEN and the like). Reducing end digits of the pcb address until I got matches resulted in matches that didn't seem to fit.
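That trimming hunt can be mechanized; a small Python sketch (the pcb address and the netstat-style lines below are made-up for illustration, not from my system):

```python
def match_pcb(lines, pcb_addr):
    """Trim hex digits off the end of pcb_addr until some line matches."""
    addr = pcb_addr.lower()
    while addr:
        hits = [l for l in lines if addr in l.lower()]
        if hits:
            return addr, hits
        addr = addr[:-1]  # drop one trailing digit and retry
    return None, []

# Made-up example lines standing in for listener output:
sample = [
    "fffff8000a1b2c00 tcp4  0/0/128  *.22",
    "fffff8000a1b2d00 tcp4  0/0/10   *.587",
]
prefix, hits = match_pcb(sample, "fffff8000a1b2d99")
print(prefix)   # "fffff8000a1b2d" after trimming two digits
```

Of course, as I found, the matches a shortened prefix turns up may belong to an unrelated socket, which is exactly why this approach kept leading nowhere.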
So, I tried to ignore it....
Then it popped up on the bind-users list. The discussion went to the tcp-listen-queue default being 10. But, it didn't seem to apply in my case, until later when I did see some messages for "5 already in queue": the base bind in FreeBSD 9.2 is 9.8.4-P2, where the default tcp-listen-queue is 3. It was changed to 10 in bind-9.9.
Anyways, when the thread came up on bind-users list, I decided that I needed to really dig for the answer. Searching through the kernel source, I eventually found my answer.
The message is reported when QLEN > 3 * QLIM / 2 (multiply first, then integer-divide; the order matters for odd QLIM)
Aha...QLEN = 10 => QLIM = 6....which was my Socks5 proxy server (ss5). Couldn't figure out how to change the listen queue through its configuration file, so I stopped using it. And, the messages stopped. I had filled out the proxy settings in squid for http & https and ss5 for Socks5....and evidently some update around the same time as my upgrade to FreeBSD 9.2 started the trouble (or perhaps FreeBSD 9.2 just made the message show up in dmesg?). Switching to using squid for all protocols fixed it.
Meanwhile...while I was looking for that old message, which I had posted back on November 20th, 2013, I stumbled upon some older threads on freebsd-stable.
I was searching on my home computer, where I'm subscribed to the list, while my work email isn't subscribed... and all my old freebsd list emails have since been purged. Still trying to get my email back under control after switching providers...both personally and at work. I plan to let an old personal domain expire once the migration is fully done, but it's going so slowly that I let it auto-renew last year...and perhaps forgetting to change the default 2 year auto-renew to 1 year was intentional? New expiration date is November 20th, 2015. It was an early domain that I had registered, before I knew that '-'s in domains are considered bad. There were a number of different blogs where I would try to leave comments, and the comments would claim to go to moderation but actually get discarded. The owner of one site eventually responded saying the system automatically does that to domains with '-'s in them, since most of them are spam. But, he'd whitelist my domain for the future. (IIRC, it was about a different antispam patch he had written for our blogging platform, functionality that never made it into newer releases and hadn't gotten updated. Wishing something like it was back again.)
That made me wonder if another site, running under my employer's domain...with a '-' in it, was rejecting my comments under my work email account, because it has a '-' in it. Switching to the form without the '-', and the comments would appear. I suggested to the site owner that he should remove that filter or at least whitelist our employer's domain.
The threads were older, and associated with upgrading to FreeBSD 9.2....the first thread was started on August 1st, 2013. It was for "8 already in queue", and later indicated that the system was for backups and did outgoing rsync's and also did NFS and Samba. The discussion talked of the strangeness of having a queue limit that small, when the default limit (128) is like 20 times that. The last reply to the thread was October 7th, 2013. Another thread started on September 30th, 2013 for "193 already in queue", with the last reply on November 12th, 2013.
The main hanging point again was that the pcb couldn't be found...and the suspicion is that it's how daemons fork processes to listen to sockets and/or to handle requests, plus that they might create all these things and then use fork to detach to run in the background. The last thread was about using dtrace to maybe see if the process could be found that way.
I've been meaning to play around with that, but when I last tried...found that it's a module, and kldload dtrace wasn't the right way to load it.... it's kldload dtraceall. I've rebooted since then, so it should be right (and done automatically via /boot/loader.conf.) Guess when I have time....
So, I wonder if I should reply to one or both of the threads....but first, its been a while since I blogged....so here I am.
As for today's message?
QLEN = 8 => QLIM = 5
At first I looked for the full address:
trimming, I eventually got:
nrpe? Hmmm, did that one new disk check push me over?
What else is 5?
10143, imapproxyd - wasn't accessing roundcube
9032, there shouldn't be anything accessing pyTiVo
2049, NFS hmmm....well, my MacBook Air might be doing a PowerNap and doing its TimeMachine backup to the NFS share on my FreeBSD server.
873, rsyncd - BackupPC is constrained against running more than 3 jobs at once, and at most 3 against this server. (I break up my [bigger] systems so it's not all backed up at once, using lockfile in DumpPreUserCmd, though I have exceptions on this server so that certain rsync shares aren't blocked if a really long backup is running. Recently had an incremental take 1 day and 11 hours; at least on my FreeBSD/ZFS system I have a command in DumpPreShareCmd to take a snapshot.... a couple of weeks earlier, I had an incremental take 1 day and 15.5 hours.)
Tweaking some sysctl's and deleting some old snapshots seems to have sped things back up.
So some of the messages convert to:
QLEN => QLIM
====    ====
 193     128
  16      10
  10       6
   8       5
   5       3
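Each reported QLEN is the smallest value that trips the kernel's QLEN > 3 * QLIM / 2 test (integer math, multiply first). A quick Python check of the pairs above (the helper name is mine, just mirroring that inequality):

```python
def min_overflow_qlen(qlim):
    """Smallest queue length that triggers the overflow message,
    per the check qlen > 3 * qlim / 2 (integer math, multiply first)."""
    return 3 * qlim // 2 + 1

# The (QLEN, QLIM) pairs from the table above:
pairs = [(193, 128), (16, 10), (10, 6), (8, 5), (5, 3)]
for qlen, qlim in pairs:
    assert qlen == min_overflow_qlen(qlim)
print("all pairs check out")
```

Note that 3 * (QLIM / 2) with integer division would give the wrong answer for odd QLIM (e.g. QLIM = 5 would trigger at 7, not 8), so the multiply really does happen first.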
OTOH, "8 already in queue" is what the first thread in August had, and he had added that it was a backup server that does outgoing rsync, and had also mentioned NFS (and Samba).
Additionally, in the output looking for QLIM == 5, were these lines:
When I was previously looking for QLIM == 6, there were only the two tcp sockets, so it was only 50-50 on picking the culprit; since the other was minidlna, which I haven't done more than build/install so far, it was really only the one socket that could explain it, and it did clear up immediately once I stopped using it.
As for NRPE, there doesn't seem to be a way to change it easily....so I'll just see if the problem continues to happen, before investigating other solutions.
So, the announcement of FreeBSD 9.2 came out on Monday [September 30th], which I missed because I was focused on my UNMC thing. But, once it appeared, I knew that I was going to want to upgrade to it sooner than later.
From its highlights, the main items that caught my attention were:
But, I did start this upgrade on October 4th....where for an unknown reason, I launched the
freebsd-update process on cbox, the busier of the two headless servers. I suspect I went with doing the upgrade on my headless servers, because they are entirely running on SSD and would likely see the benefit of lz4 compression. And, perhaps I did cbox, because it was the system that could most gain from lz4.
It took a couple iterations through
freebsd-update, before I got an upgrade scenario that could proceed. And, it took a long time given the high load that is cbox.
That is, cbox is an Atom D2700 (2.13GHz, dual core) processor, with cacti (especially with the inefficient, processor/memory intensive percona monitoring scripts -- it might help if script server support worked, and wasn't just a leftover from what it was based on) being the main source of load. The load average is usually in the 11.xx area, except during certain other events (like, since 3.5, when cf-agent fires...cbox is set to run at a lower frequency than my other systems) or when the majority of logs get rotated and bzip'd. And, there's also some impact when zen connects to rsyncd each day for backuppc. But, these spikes weren't that significant. Though the high load would cause cf-agent runs to take orders of magnitude longer than on other systems, including its 'twin' dbox.
Also ran into a problem (again?) where a lot of the differences that freebsd-update needed resolved were differences in revision tags....some as silly as '9.2' vs '9.1', others had new time stamps or usernames, but seldom any changes to the contents of the file. I then discovered a problem: having some of these files under cfengine would revert them back to having '9.1' revision strings, which confused freebsd-update. I ended up updating all the files in cfengine to have the 9.2 versioning, though I thought about just removing/replacing it with something else entirely, though I wasn't sure the impact that would have on current/future upgrades.
Though it did seem to cause problems with the other two upgrades, where it would say that some of these files were now removed and ask if I wanted to remove them. That doesn't make sense, since it didn't say that with the first upgrade. It was probably just angry that these files already claimed to be from FreeBSD 9.2.
It also didn't like that I use sendmail (so my sendmail configs are specific to my configuration), or that my printcap is the one auto-generated by cups, etc.
But, once it got to where it would let me run my first "freebsd-update install", I ran it, rebooted, ran it again, rebooted, and updated stuff. (Though it didn't complain as much, perhaps because some of the troublesome kernel mod ports had corrected the problem of installing into /boot/kernel, or perhaps enough stayed the same between 9.1 and 9.2 that things didn't freak out like before. And, this includes the virtualbox kernel mod, when I did the upgrade on zen, and later mew.) But, I re-installed these ports and lsof. I did a quick check of other services, and then upgraded the 'zroot' zpool to have feature flags. Which means it no longer has a version; apparently, instead of jumping the numbers to distinguish from Sun/Oracle, ZFS has eliminated version numbers (beyond 28) in favor of flags for the features added since. I wonder if the flags capture everything that has changed since 28, since I thought there have been other internal improvements that aren't described by version numbers. Namely, I seem to recall that there have been improvements in recoverability....it had been suggested, when I was trying to recover a corrupt 'zroot' on mew, to try finding a v5000 ZFS live CD. Which I don't think I ever found, and I gave up anyways when I concluded the level of corruption was too great for any hope of recovery and that I needed to resort to a netbackup restore, before the last successful full got expired. It was nearly 90 days old; the other two monthly fulls didn't exist due to the system instability that eventually caused the corrupted zpool (eventually found to be a known bad revision of the Cougar Point chipset and a bad DIMM).
Things seem to finally be stable from using a SiI3132 SATA controller instead of the onboard one, and getting that bad DIMM replaced....it was weird, since it was a Dell Optiplex 990, purchased new over a year after the problem had been identified and a newer revision of the chipset had been released. I did eventually convince Dell support to send me a new motherboard and replace the DIMM. The latter was good, since I had been using DIMMs from another Dell that had been upgraded, so I had less memory for a while. But, while at first I did use the onboard SATA again, eventually I started having problems that would result in losing a disk from the mirrored zpool, and eventually a reboot where they would both be present again [though gmirror would need manual intervention]....moving back to the SiI3132 has finally gotten things stable again. Though the harddrives in mew are SATA-III, so it would've been desirable to have stayed on the SATA-III onboard ports; then again, it was these ports that were the main source of problems in the prior defective revision. Perhaps the fact that the prior revision had a heatsink and the new one didn't wasn't because they no longer needed it to compensate for over-driving the silicon for the SATA-III portion, but an oversight with the newer revision motherboard.
The problem did tend to occur in the early morning hours on the weekend, when not only is there a lot of daily disk activity, but there is also a lot of weekly disk activity, etc. Oh well.
So, after upgrading the zpool and reinstalling the boot block/code, I rebooted the system again. I had already identified the zfs filesystems where I had 'compression=on', so I had written a script to change all of these to 'compression=lz4', which I now ran.
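The script was essentially a loop over zfs get/set; a minimal sketch of the idea (not my original script; the dataset names are illustrative, and the subprocess calls assume the standard zfs CLI):

```python
import subprocess

def datasets_to_convert(zfs_get_output):
    """Parse `zfs get -H -o name,value -r compression` output and return
    the datasets whose compression is literally 'on' (i.e. not lz4 yet)."""
    names = []
    for line in zfs_get_output.strip().splitlines():
        name, value = line.split("\t")
        if value == "on":
            names.append(name)
    return names

def convert(pool="zroot"):
    """Flip every compression=on dataset in the pool to lz4."""
    out = subprocess.run(
        ["zfs", "get", "-H", "-o", "name,value", "-r", "compression", pool],
        capture_output=True, text=True, check=True).stdout
    for ds in datasets_to_convert(out):
        subprocess.run(["zfs", "set", "compression=lz4", ds], check=True)

# Example of the parsing step (illustrative dataset names):
sample = "zroot\ton\nzroot/usr\tlz4\nzroot/var\ton\n"
print(datasets_to_convert(sample))
```

Only datasets explicitly set to 'on' get touched, so anything already on lz4 (or inheriting from a converted parent) is left alone.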
And, then I turned my attention to doing dbox.
On my FreeBSD system, my apache webserver would get angry whenever I update the php & extensions ports, requiring a bunch of other operations after the upgrade.
Since I've been playing around with CFEngine 3, I had started to add to my "
bundle agent apache", to do more than just promise config files current, process running, reloads, etc.
So, one of the first problems I had run into on FreeBSD is that there are certain extensions that need to be in a particular order in '/usr/local/etc/php/extensions.ini'. Which is solved by using a script. Well, fortunately, when this script is run it results in a backup file, 'extensions.ini.old', which is the same age or newer than 'extensions.ini'.
CFEngine3 can take care of it this way:
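(The actual CFEngine promise isn't reproduced here; as a plain-Python illustration of the logic it encodes, with made-up extension names standing in for the real order-sensitive ones:)

```python
import os

INI = "/usr/local/etc/php/extensions.ini"
BAK = INI + ".old"

# Illustrative names: the real order-sensitive extensions depend on
# what's installed; the point is only that some must come first.
FIRST = ["session.so", "mysql.so"]

def needs_fixing(ini=INI, bak=BAK):
    """The ports script leaves a .old backup that is the same age or newer
    than the regenerated ini; that's the cue that the ini was rewritten."""
    return (os.path.exists(bak)
            and os.path.getmtime(bak) >= os.path.getmtime(ini))

def reorder(lines, first=FIRST):
    """Move the order-sensitive extension= lines to the front; sorted() is
    stable, so everything else keeps its relative order."""
    def key(line):
        for i, name in enumerate(first):
            if name in line:
                return i
        return len(first)
    return sorted(lines, key=key)

print(reorder(["extension=curl.so", "extension=session.so"]))
# -> ['extension=session.so', 'extension=curl.so']
```

The mtime comparison is what makes this safe to promise repeatedly: once the file has been fixed and is newer than the backup, nothing fires again.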
g.apache is "
apache22" currently on FreeBSD, and "
apache2" on Ubuntu. Someday it might become "
apache24" on FreeBSD.
Since I did FreeBSD first, and I'm still working on getting my one remaining (of 4 or fewer) Ubuntu systems rolled in, I have g.rc_d for the system rc.d directory and g.lrc_d as "/usr/local/etc/rc.d" for FreeBSD; they are both set to "/etc/init.d" for Ubuntu. I also have a g.init_d for Ubuntu, but not FreeBSD. Not sure which I'll use where....I suppose if it's an OS specific case, g.init_d would get used, and if it's not...then whichever one is the correct one for FreeBSD will get used.
There are Unix servers at work with uptimes >1000 days, there are even servers with uptimes >2000 days; in fact there are servers that have now exceeded 2500 days (I'm looking at one with 2562+ days.)
On one hand there are SAs that see it as a badge of honor or something to have had a system stay up this long. OTOH, it's a system of great dread.
A while back this system was having problems....it's Solaris and somebody had filled up /tmp....fortunately, I was able to clean things up and recover before another SA resorted to hard rebooting it.
The problem with these long running servers, especially in an ever changing, multi-admin shop, is that you can't be sure that the system will come back up correctly after a reboot.
We've lost a few systems at work due to a reboot. Some significant ones were as simple as replacing a root disk under vxvm and forgetting to update the Sun partition table, or a zpool upgrade and forgetting to reinstall the boot block. To more significant ones, where a former SA had temporarily changed the purpose of an existing system, all by command line and running out of /tmp...so that after it had been up for 3+ years and he'd been gone over a year....patching and rebooting made it disappear.... the hardware that the system was supposed to be on needed repair, but he had never gotten around to it.
It'll be interesting to see what happens should the system ever get rebooted.
So, what brought this post on?
So, what started as taking a week to set up a new nagios server at work ended up taking almost a month...because there were many days where I'd only have an hour or less to put into the side task. The other stumbling block was that I had decided the new nagios server configuration files would be managed under subversion, instead of RCS as in the previous two incarnations. New SA's don't seem to understand RCS and that the file is read-only for a reason...and it's not to make them use :w! ... which lately has resulted in the sudden reappearance of monitors for systems that had been shut down long ago.
Though now that I think of it, there used to be a documented procedure for editing zone files (back when it was done directly on the master nameserver and version controlled by RCS.) Which as I recall was to perform an rcsdiff, and then use the appropriate workflow to edit the zone file:
    % rcsdiff zonefile
    if differences
        % rcs -l zonefile
        % ci -l zonefile    (make rude comment that somebody made edits)
        % vi zonefile
        % ci -u zonefile
    else
        % co -l zonefile
        % vi zonefile
        % ci -u zonefile
    fi
But, when I took over managing the DNS servers, I switched to having cfengine manage them, and the zone files now live under masterfiles, so version control is now done using subversion. I had started butchering the DNS section in the wiki; I probably should write something up on all the not so simple things I've done to DNS since taking it over...like split, stealth, sed processing of the master zone for different views, DNSSEC, the incomplete work to allow an outside secondary to take over as master should we ever get a DR site, and other gotchas, like consistent naming of slave zone files now that they are binary.
Additionally, work on the nagios at work was hampered by the fact that provisioning for Solaris and legacy systems is CF2, and the new chef based provisioning is still a work in progress...where I haven't had time to get into any of it yet. So, I had to recreate my CF3 promises for nagios in CF2.
But Friday before last weekend it finally reached the point where it was ready to go live. Though I've been rolling in other wishlist items and smashing bugs in its configuration, and still need to decide what the actual procedure will be for delegating sections of nagios to other groups.
One of the things I had done with the new nagios at work was set up PNP4Nagios...as I had done at home. And, while looking to see if I needed to apply performance tweaks to the work nagios, all the pointers were to have mrtg or cacti collect and plot data from nagiostats. Well, a new work cacti is probably not going to happen anytime soon, and the old cacti(s) are struggling to monitor what they have now. (I spent some time a while back trying to tune one of them...but it's probably partly hampered by the fact that its mysql can use double the memory that is allocated to the VM, though reducing it from running 2 spine's of 200 threads each on the 2 CPU VM to a single spine with fewer threads has helped. Something like the boost plugin would probably help in this case, but the version of cacti is pre-PIA. And, it could be a long time before it gets replaced (not sure if an upgrade is even possible....) Our old cacti is running on a Dell poweredge server that has been out of service over 6 years... with the cacti instance over 8 years old (Jul 8, 2005)....and the OS is RHEL3.)
Anyways, it occurred to me that there should be a way to get PNP4Nagios to generate the graphs, so I searched around and found check_nagiostats. Though there's no template for it. Oh, there is a template nagiostats.php; if I create a link to it named check_nagiostats.php, it should get me 'better' graphs. Which is what I have CF2 do at work.
So, recently there was a 'long' 4th of July weekend....on account that I opted to take Friday (the 5th) off as well.
I kind of thought I would tackle a bunch of different projects this weekend, though I've pretty much shelved the idea of re-IP'ng my home network. Perhaps something to do when I get my configuration management better fleshed out.
What I decided was that it looked like there was just one last thing on one of the two Ubuntu servers that I'm retiring. So, I figured I'd quickly move that and then go on to the next thing. In the end, I didn't get it completed until Monday night.
For background, some years back...after my return to IRC, I had initially gone with Chatzilla (being that Firefox was my standard browser), which later moved to xulrunner and Chatzilla so it was independent of my browser. Though it was kind of annoying having it running at work and at home, and somewhat confusing for co-workers that ran text based IRC clients in screen somewhere and ssh'd in, etc. Most people that did this, were doing irssi.
So, I initially built it from source and ran it on my old RedHat 7.3 server, and that was usable. Later, I set up an Ubuntu box to replace that server (the hardware had previously been SuSE....acting as an internal router for ivs status tracking....). It evolved, in that I would start screen detached from rc.local....which was important since the system would see patches on a regular basis, requiring reboots....which is kind of a reason for switching to FreeBSD.
Over time, I would make little tweaks here and there, to this irssi setup. Like twirssi, doing ssl, and later bitlbee to integrate Facebook chat (came across some stuff that I should add now...)
And, incorporating other tweaks I come across online when there's some problem that becomes sufficiently bothersome that I want to address it. The one problem I haven't been able to solve is keeping server/system messages confined to the one window. Namely keeping system CRAP going to the system window, and allowing channel CRAP to show up in the channel windows....but instead I'll get system CRAP in whatever channel window is active. Which is annoying because it's usually the work channel. Where it should be just signal and no noise.
I had started to move things more than a month ago, in that I built irssi and bitlbee (including the cfengine3 promise for it...not really much config-wise for cfengine to manage for irssi...though I envisioned promising that it's running all the time, though irssi has generally been stable everywhere else that I've run it.)
But, then I got distracted by other cfengine3 work. Even though things started to get pressing when twirssi stopped working, due to API 1.0 going away...so I had to update Net::Twitter and twirssi. Updating twirssi wasn't that hard to do, but Net::Twitter was a problem, so I opted to remove it and its dependencies and then install it and its dependencies using CPAN.
I also made note to install
net/p5-Net-Twitter from ports on dbox.
twirssi seems to be having other issues, which I had intended to investigate...perhaps after I move... But, that was like a month ago....
This is an update to the "ddclient & squid" here
Ran into a new problem recently....though the need for SSL in squid on ubuntu is diminishing, given that I'm slowly replacing this server with a FreeBSD server.
As a result, I don't pay attention to this ubuntu server as much as I used to, so I've configured unattended-upgrade. It was installed, but it didn't seem to do anything, in that on other servers I'd log in to find lots (40+) of patches available, more than half of them security. So I looked up how to configure it to do more than just security patches, including sending me email and, on some systems, automatically rebooting when necessary. (Should've thought to see how unattended-upgrade is configured to do such things in the Ubuntu AMI I have in AWS.)
Since I got unattended-upgrade configured on this old server (32-bit Ubuntu Server, which I've heard has a 12.04LTS download??? They had said they dropped 32-bit server support, so the last version was 10.04LTS. So I couldn't upgrade and now I'm way past EOL, which is causing problems...probably need to hunt down the landscape and ubuntuone services and nuke them, instead of letting them degrade my server for being EOL.) I've also had to update packages on here from outside sources to keep things running, so I guess I should work harder on abandoning this server.... It'll likely get reborn as [yet ]a[nother] FreeBSD server....along with the server that I think I have all the parts collected for, but just need to sit down and put together. That one started as a mostly functional pulled 1U server, in need of ... well, either new fans or a new case.... I opted for the new case route. It also needed drives and memory. But, the new case route...aside from case/power supply...meant I would need to get heatsinks, since the passive ones relied on the 1U case channeling air flow....which would be hard to recreate in the tower case I went with. It's a huge tower case, given that it's an E-ATX motherboard...yet it isn't a full tower (like the formerly-Windows machine called TARDIS...someday I'll work on its regeneration....need money to buy all the bits and pieces that'll make that up, which I haven't fully worked out....or where it'll go, since my dual 23" widescreen FreeBSD desktop has consumed all of the desk that it would've shared....and I'm not really keen on the idea of a KVM for this situation.)
Anyways...every day I get an email from unattended-upgrade for this system.... with:
    Unattended upgrade returned: True
    Packages that are upgraded: squid-common
    Packages with upgradable origin but kept back: squid squid-cgi
    Package installation log:
    Unattended-upgrades log:
    Initial blacklisted packages:
    Starting unattended upgrades script
    Allowed origins are: ["['Ubuntu', 'heron-security']", "['Ubuntu', 'heron-updates']"]
    package 'squid' upgradable but fails to be marked for upgrade (E:Unable to correct problems, you have held broken packages.)
    Packages that are upgraded: squid-common
    Writing dpkg log to '/var/log/unattended-upgrades/unattended-upgrades-dpkg_2013-07-06_08:05:42.056193.log'
    All upgrades installed
This is because of that quirk where, even though I rebuilt my version with SSL and kept it the same version...it wants to install its version to replace mine (of the same version). Which is why I did the hold thing.
I could alternatively add a string to make my version sort ahead of the current one....though I suppose I won't unhold...so that unattended-upgrade won't upgrade it should such a thing appear (unlikely, since both the OS and squid are ancient...and there'll be no more updates.) But, the intent is to hopefully silence unattended-upgrade in this matter.
Though I'm kind of surprised it's still doing something....hmmm, guess there was a new security patch to squid 2.7 back on January 29, 2013....that I've been missing (suppose it had already downloaded the update into its 'cache'....or the backend is still there, it's just not getting updates beyond what's there....whatever, I think I'm down to one more service to move off....)
In retrospect, maybe what I should've done is switch the origin of my sysutil/cfengine34 port when 3.5.0 came out. Since cfengine-3.4.5 has recently come out, and bug fixes to cfengine-3.4.4 were more of what I was after than new features. Though I am intrigued by what 3.5.0 appears to bring, and am considering making use of it...of course, by the time I get to it, 3.5.1 or newer might be out.
OTOH, do I really want to build cfengine-3.4.5 in the semi-usable package management system we use at work for building and maintaining packages for Solaris 9 and Solaris 10 SPARC, and Solaris 10 x64? The system builds everything 32-bit, though I'm pretty sure we don't have 32-bit hardware anywhere in the datacenter anymore. Though we still have a few Solaris 10 systems around.
    % wget 'https://www.cfengine.com/source-code/download?file=cfengine-3.4.5.tar.gz'
    --2013-07-04 08:39:34--  https://www.cfengine.com/source-code/download?file=cfengine-3.4.5.tar.gz
    Resolving www.cfengine.com (www.cfengine.com)... 18.104.22.168
    Connecting to www.cfengine.com (www.cfengine.com)|22.214.171.124|:443... connected.
    OpenSSL: error:14077458:SSL routines:SSL23_GET_SERVER_HELLO:reason(1112)
    Unable to establish SSL connection.
Seems to be a problem with a client using openssl 0.9.8 talking to a webserver using 1.0.0?
Guess there's a patch submitted against 0.9.8y.... http://firstname.lastname@example.org/msg32486.html
But, this will be a big mess at work....nothing is using 0.9.8y yet (though I've been meaning to build it so I'll be ready when there's a bind-9.9.3-P2...I had started building 9.9.3 when there was a security advisory for a problem introduced in that version...so I'm waiting for the next 'real' security patch to do the upgrade...though maybe I shouldn't, since the intent is for this to be the first 64-bit build....)
Not sure what I'm going to do about cfengine3 at work though....