Tags: cacti

06/08/13

  08:42:00 pm, by The Dreamer   , 1176 words  
Categories: Software, Computer, Storage, FreeBSD, CFEngine

Another weekend seems to be slipping away on me....

And, it's the same time suck....cacti.

Last weekend got away from me, because I made another attempt to improve cacti performance. I had tried adding 3 more devices to it, and that sent it over the limit.

I tried the boost plugin....but it didn't help, and only made things more complicated and failure-prone. Evidently, updating rrd files is not the bottleneck on my cacti server. Probably because it's running on an SSD.

I made another stab at getting the percona monitoring scripts to actually work under the script server, but that failed. I suspect the scripts aren't reentrant, because of their use of global variables and their reliance on 'exit' to clean up the things they allocate or open.

I had blown some previous weekend trying to build the most recent version of hiphop to maybe compile the scripts, but after all the work of figuring out how to compile the latest 2.0.x version...it would SEGV, just as the older lang/hiphop-php did after I resolved the problem of building with current boost (a template had changed to need a static method, meaning old code won't link with newer boost libraries without a definition of it). And, this is beyond what's in my wheelhouse to try to fix.

During the week, I had come across some more articles on tuning FreeBSD, namely a discussion of kern.hz for desktops vs. servers: the default of 1000 is good for desktops, while the historical setting of 100 is what to use for servers. Though IIRC, ubuntu uses 250 HZ for desktops and 100 HZ for servers; it also doesn't do preemption in its server kernel, along with other changes (wonder if some of those would apply to FreeBSD?). Though modern kernels have been moving to be tickless, which I thought was in for FreeBSD 9....the more correct term is dynamic tick mode, and it's more about not doing unnecessary work when things are idle. Which isn't the case with 'cbox'. So, perhaps fiddling with kern.hz and other sysctls might still be relevant, though I haven't really found anything detailed/complete on what would apply to my situation.

So, I thought I would give kern.hz=100 a shot.
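
It's just a loader tunable, so a one-liner (which, as I found out below, only takes effect after an actual reboot):

    # /boot/loader.conf
    kern.hz="100"

    # verify after the reboot:
    #   sysctl kern.hz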

At first it seemed to make a difference....no improvement in how long it takes to complete a poll, but the load was lower. Until I realized that a service had failed to start after the reboot. I had only run the rc script by hand; I hadn't tested it in a reboot situation. And, it wasn't really an rc script....it used to be a single line in rc.local that worked on ubuntu and FreeBSD (except on one of the Ubuntu systems it resulted in a ton of zombie processes, so making it an init.d script that I could call restart on happened).

So, I spent quite a lot of time reworking it into what will hopefully be an acceptable rc script. One thing I had changed along the way was the use of a pipe ('|'), which was causing the process after the pipe to respawn, turning the previous process into a zombie, each time the log file was rotated and "tail -F" announced the switch. And, this was while I was moving the service to FreeBSD (and management under cfengine 3).

Though looking at my cacti graphs later....while the service had failed to start after the reboot, it turned out it had been running for some time, until I broke it completely in trying to rc-ify the init script. Well, duh....I had cfengine set to promise that the process was running, and it had repaired the fact that it hadn't started after the reboot.

Another thing I had done when I init-ified the startup of this service was to switch from using a pipe ('|') to using a fifo, which addressed the respawning and zombie problem and eliminated the original reason to have an init.d script....

While the init.d script had worked on FreeBSD...it was just starting the two processes with '&' on the end and then exiting. FreeBSD's rc subroutines do a bit more than that, so things weren't working. The problem was that even though I was using daemon instead of '&', so that daemon would capture the pid and make a pidfile, it seems daemon wants the process it manages to be fully working before it'll detach. But, the process is blocked until there's a sink on the other end of the fifo (is 'sink' the right word for the fifo's reader?). I first wondered if I could just flip the two around, but I suspect starting the read process first would leave it just as blocked until the write process is started. So, I cheated by doing a prestart of the writing process and only tracking the reading process.

Though it took a bit more work to get the 'status' action to work....eventually I found I needed to define 'interpreter', since the reading process is a perl script. And, check_pidfile does more than just check that there's a process at the pid; it checks that it's the right process, and it distinguishes between arg0 and the rest.
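
For my own future reference, the shape of the final script is roughly this....a from-memory sketch, with the service name ('logreader'), the fifo path, and the writer/reader scripts all invented, since I'm not naming the real service here:

    #!/bin/sh
    # PROVIDE: logreader
    # REQUIRE: DAEMON

    . /etc/rc.subr

    name="logreader"
    rcvar="logreader_enable"

    fifo="/var/run/logreader.fifo"
    pidfile="/var/run/logreader.pid"

    # the reader is a perl script, so check_pidfile needs the interpreter
    # to match the right process (arg0 is perl; the script is in the rest)
    command="/usr/local/libexec/logreader/reader.pl"
    command_interpreter="/usr/local/bin/perl"

    start_cmd="logreader_start"

    logreader_start()
    {
        [ -p "${fifo}" ] || mkfifo "${fifo}"
        # the writer blocks until something opens the fifo for reading,
        # so it gets "prestarted" in the background and isn't tracked
        /usr/local/libexec/logreader/writer.sh > "${fifo}" &
        # only the reader is tracked; daemon(8) writes the pidfile
        /usr/sbin/daemon -p "${pidfile}" ${command} < "${fifo}"
    }

    load_rc_config $name
    : ${logreader_enable:="NO"}
    run_rc_command "$1"

Opening the fifo for reading is what unblocks the prestarted writer, so the ordering works out.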

Pretty slick...guess I need to do a more thorough reading of the various FreeBSD handbooks, etc. Of course, it has been 13+ years between when I first played with FreeBSD and its takeover of my life now.

As for the tuning....it had made a small difference, but no improvement in the cacti system stats. Basically, the load average fluctuates a bit more and the CPU utilization seems to be a bit lower...though that could be because the 4 lines of the cacti graph aren't so close to each other now.

Meanwhile...I noticed that one of the block rules in my firewall had a much higher count than I would expect, so I think I was about to get logging configured to see what that's about.....(which I was working on when I remembered that I hadn't rebooted after making the kern.hz change to /boot/loader.conf yesterday...the commit also picked up files that I had touched while working on moving the one remaining application off 'box', though that may get delayed to another weekend....perhaps the 4 day one coming up.)

I had set cf-execd's schedule to be really infrequent (3 times an hour), because I was doing a lot of testing, and cf-agent collisions are messy....messier than they were in cfengine 2 (in 2 it usually just failed to connect and aborted; in 3 it would keep trying and splatter bits and pieces everywhere....which is bad when there are parts using single copy nirvana, resulting in services getting less specific configs until the next run).
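
That schedule lives in the executor control body....the time classes below are my best recollection of what "3 times an hour" looks like:

    body executor control
    {
        # three agent runs an hour, instead of the default every 5 minutes
        schedule => { "Min00", "Min20", "Min40" };
        splaytime => "1";
    }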

But, I sort of brought back dynamic bundle sequences....keyed off of "from_cfexecd", so I can test my new promise with less risk of colliding with established promises. Though there are other areas where things still get messy....I need to clean up some of the promises I had based on how things were done at work, so that the promises are more standalone.
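
The shape of it is roughly this....bundle and file names are placeholders, and "from_cfexecd" is just a class I arrange to have set on cf-execd-driven runs (e.g., via -D in cf-execd's exec_command):

    bundle common g
    {
      vars:
        # cf-execd-driven runs stick to the established promises
        from_cfexecd::
          "seq" slist => { "update", "main" };
        # interactive test runs also pull in the promise under development
        !from_cfexecd::
          "seq" slist => { "update", "main", "testing" };
    }

    body common control
    {
        bundlesequence => { "@(g.seq)" };
        inputs => { "update.cf", "main.cf", "testing.cf" };
    }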

Kind of weird using my home cfengine 3 setup, and other admin activities, as the means to break the bad habits I had picked up at work....

04/28/13

  10:06:00 pm, by The Dreamer   , 989 words  
Categories: Hardware, Software, Computer, Ubuntu, FreeBSD, CFEngine

Took a diversion from cacti and now it's nagios

So, doing cacti on cbox doesn't seem to be working long term...but, the moment is being prepared for....I'm starting to assemble the pieces to build a new machine to do this and handle some other tasks that I've been looking for a place for.

Back to cfengine, I added a promise for dnetc (distributed.net)....and then a promise to finally configure CUPS on the two servers. And, then I turned to nagios.

I spent a couple evenings creating the initial configuration of nagios, working in design changes that I wanted to make and initial monitoring of localhost (dbox). Though it wasn't straightforward....there were differences here and there....mostly in FreeBSD layout, paths, and some of the commands taking different options. But, eventually I got everything running. My old check_dyndns worked once, but then stopped working....the problem was that it did 'stat -c "%Y" ...', which doesn't work on FreeBSD; 'stat -f "%m" ...' was the adjustment for that. Also, while all the check_* plugins seem to be there, the command definitions were lacking....but I guess having command definitions for everything is part of the debian/ubuntu packaging. There were other frills that came with that, that I don't mind not having...
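
For reference, the two variants (the cache file name below is just for illustration):

    # GNU coreutils stat (ubuntu): mtime as seconds since the epoch
    stat -c '%Y' /var/tmp/dyndns.cache

    # BSD stat (FreeBSD): the same value
    stat -f '%m' /var/tmp/dyndns.cache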

I did run into check_ntp being deprecated....with check_ntp_time and check_ntp_peer being the tests to use....separating, and making more clear, whether you're comparing time between servers using ntp or checking the state of the ntp server itself...
It did show some interesting oddities in holding NTP time on my home network....I know that I should have 3 or more ntp servers, but it seems that I'm often landing in the state where I only have 2....with lots of delay, resulting in pretty good swings of jitter....almost makes me wonder if this is something I could graph in cacti.... :hmm:
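
If nothing else, check_ntp_peer is what makes the jitter visible....a command definition along these lines, with made-up thresholds (offset in seconds, jitter in ms):

    define command{
        command_name    check_ntp_peer
        command_line    $USER1$/check_ntp_peer -H $HOSTADDRESS$ -w 0.1 -c 0.5 -j 10 -k 50
        }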

Wonder if I can find a cheap NTP appliance somewhere....

The last stumbling block was check_dhcp, which seems to be broken on FreeBSD. All the discussion about it seemed to point to firewalls, but with no firewalls it still didn't work....tcpdump in both places says it's sending stuff, but no packets appear on the network. Yet I can see the other DHCP traffic on the network.

I removed that check and called it a night. I mulled some possible workarounds....the first one I tried was setting up linux compatibility and trying to run the check_dhcp from my working (ubuntu) nagios. Well, it didn't work...it couldn't find an interface. Oh well, guess there's the ugly way....use nrpe to invoke it. Though that didn't work right away.....probably because while I had created new nrpe configs for all my servers in cfengine, I haven't put any of my ubuntu servers under cfengine yet (most of the other promises haven't been implemented for ubuntu yet). It was pretty simple to include nrpe.cfg for everything....in fact, it condensed down to only 3 files....a freebsd version, an ubuntu version and a host-specific version for orac. Well, not right away...that happened more recently...while I was going through and updating the nrpe.cfg's by hand on the ubuntu servers, I noticed that some of the files differed only in comments....so I made further simplifications in cfengine...which'll propagate out eventually....
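
The ugly way is at least simple....the plugin path, interface, and host below are debian-ish defaults and guesses, and check_dhcp reportedly needs raw-socket privileges on the ubuntu end (the debian packaging takes care of that):

    # nrpe.cfg on the ubuntu box:
    command[check_dhcp]=/usr/lib/nagios/plugins/check_dhcp -i eth0

    # command definition on the FreeBSD nagios server:
    define command{
        command_name    check_dhcp_nrpe
        command_line    $USER1$/check_nrpe -H orac -c check_dhcp
        }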

Long term, I'll probably just have to track down some alternate implementation of check_dhcp....

I then added cbox to monitoring...and then looked at monitoring the things that are on cbox/dbox...so I found checks for freeradius, cups, and squid, along with improvements to the checks on ntp. The check_squid was tricky....I got it working by hand, after making the suggested change to the parsing of the default Cache type, which turned out to be a squid3 vs. squid2 difference (box is still running squid 2.7 - since I had re-built it by hand with SSL support, and blocked ubuntu from updating it. Orac wasn't blocked, so it eventually turned into squid3.)

It worked by hand, but wouldn't work under nagios...turned out that the embedded perl wasn't liking it. I was going to disable embedded perl for it, when I took a look at what it was complaining about. And, did some reading on embedded perl....the gist was "use strict", "perl -w" and "perl -c" as starting points. perl -w was fine, but perl -c had one problem....which I fixed. But, no go. And, then I noticed the line "# todo : use strict"....guess I'll have to deal with that.

And, making that all happy, got it working.
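
(For future reference, there was also the escape hatch of opting a single plugin out of embedded perl, via a magic comment near the top of the script:)

    #!/usr/bin/perl
    # nagios: -epn
    # ...the rest of the plugin then runs under plain perl instead of ePN
    use strict;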

The only other quirk was that the memory check wouldn't work on FreeBSD; I guess there's no mallinfo() available there. So, no running that test on those servers....plus no Cache test on box. But, that still left enough of a variety of tests that worked on all of them. And, it wasn't so much that I wanted to get all the information, but I chose to define all the different tests with the ports set into the test....so running the checks would also verify that all my squid ports worked. There's actually only two that matter, but I have all my squids configured the same, listening on 5 or 7 ports....depending on whether I have SSL enabled. Though I pretty much only need two now. I'm not doing transparent proxying, and I don't need the SSL now that I've split box into dbox/cbox....the SSL was so ddclient could work on box and update dyndns via proxy over DSL....
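
Since check_squid's options vary by version, the port-coverage half of that idea works just as well with the stock check_tcp....3128 below is merely squid's default port, not necessarily one of mine:

    define service{
        use                     generic-service
        host_name               cbox
        service_description     Squid TCP port 3128
        check_command           check_tcp!3128
        }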

Next up is adding zen to nagios, and coming up with more tests of things that are specific to zen, whether covered or not covered in the old nagios.

Though as I worked along...there were things I couldn't find monitors for...though I realized that I could have cfengine promise that those services were running. Plus cfengine was also taking care of other things. So, I should probably work on writing some promises for zen, so I can have promises to make sure things are started up again after a port is updated, or that php/extensions.ini is reordered, etc.
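
The shape of such a promise is simple enough....the service name and rc.d path below are invented:

    bundle agent zen_services
    {
      processes:
          # promise the daemon is running; set a class if it isn't
          "lighttpd"
            restart_class => "start_lighttpd";

      commands:
        start_lighttpd::
          "/usr/local/etc/rc.d/lighttpd restart";
    }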

But, I'll probably continue adding everything else to nagios first.

04/21/13

  10:52:00 pm, by The Dreamer   , 2431 words  
Categories: Software, Computer, Ubuntu, FreeBSD, CFEngine

Home server migration ran into some cacti

The home server migration that I wrote about on April 7th, hit a delay .... I started working on migrating cacti and nagios.

I probably should've started with nagios, since I don't think that would've taken as long as cacti has.

I had already been monitoring the new servers using my old cacti installation. I had pretty much decided that moving the old installation to the new servers wasn't going to be straightforward....partly because of versions, and no easy intermediary. But, I wasn't too worried about the historical data in my old cacti....

I figured that once I got things up and running, I'd just export the templates and import them into my new system and I'd be done.

But, then I hit a hitch....the squid templates I had weren't working on the new system....all I could find were old results about issues with doing SNMP to ports other than 161, possibly due to newer versions of net-snmp....though that later turned out to be a wild goose chase.

Anyways...the workaround was to use the proxy option in net-snmp. I recall having tried net-snmp before discovering bsnmpd on FreeBSD, but I gave it a shot.

Before I got to testing the proxy...I soon saw that it wasn't giving the same information as bsnmpd...specifically, for the HOST-RESOURCES-MIB and parts of UCD-SNMP-MIB. So, I decided that I could proxy net-snmp to bsnmpd and get those. But, that didn't work.....after some reading, the answer was that I needed to either map bsnmpd in somewhere else or exclude those areas from net-snmp.
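
The proxy line itself is the easy part....this hands the HOST-RESOURCES tree off to bsnmpd, assuming bsnmpd is moved to an alternate port (1161 below is an assumption):

    # snmpd.conf (net-snmp)
    proxy -v 2c -c public localhost:1161 .1.3.6.1.2.1.25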

Well, during the build of net-snmp, it did make reference to being able to set some variables in make.conf -- such as NET_SNMP_WITH_MIB_MODULE_LIST and NET_SNMP_WITHOUT_MIB_MODULE_LIST. And, by default NET_SNMP_WITH_MIB_MODULE_LIST contained "host disman/event-mib smux mibII/mta_sendmail mibII/tcpTable ucd-snmp/diskio sctp-mib if-mib".

So, I tried setting NET_SNMP_WITH_MIB_MODULE_LIST without 'host' and 'ucd-snmp/diskio', and tried to exclude the rest of ucd-snmp in NET_SNMP_WITHOUT_MIB_MODULE_LIST. Which got me a strange error about 'host' being in both lists.

I delved into the Makefile, and found that while the other settable NET_SNMP parameters were done as '?=' in the Makefile, NET_SNMP_WITH_MIB_MODULE_LIST was done as '+='...with conditionals that '+=' the last two modules.

OSVERSION >= 700028 adds 'sctp-mib', and the port option MFD_REWRITES adds 'if-mib'....I had started looking at what the fix might be, but decided that all I needed to do was remove all these lines...since I'm going to have my own definition in my /etc/make.conf file.

Trying to exclude all of ucd-snmp didn't make things work....but I did an snmpwalk comparing bsnmpd and net-snmp, and decided that the two areas that were lacking were ucd-snmp/diskio and ucd-snmp/disk_hw. So, I recreated the 'original' NET_SNMP_WITH_MIB_MODULE_LIST in /etc/make.conf, without 'host' and 'ucd-snmp/diskio', and put 'ucd-snmp/disk_hw' in NET_SNMP_WITHOUT_MIB_MODULE_LIST. The build grumbled, but finished.
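
So /etc/make.conf ended up with something like this (reconstructed from the default list above, so treat it as a sketch):

    # the port's default list, minus 'host' and 'ucd-snmp/diskio'
    NET_SNMP_WITH_MIB_MODULE_LIST= disman/event-mib smux mibII/mta_sendmail \
            mibII/tcpTable sctp-mib if-mib
    NET_SNMP_WITHOUT_MIB_MODULE_LIST= ucd-snmp/disk_hw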

And that worked.....all my ucd-snmp host graphs were working on my new cacti server in the same detail that I was getting before (IE: the CPU Utilization graph gave traces for each of the 8 vCPUs instead of just one, and I could see all the ZFS filesystems, not just the single zroot).

So, I went back to looking at getting squid graphs to work....that didn't work.


04/18/13

  12:47:00 pm, by The Dreamer   , 1259 words  
Categories: Software, Networking, AT&T DSL, Broadband, CFEngine

Zoom ADSL X3 5760 & Cacti

It was a dark and stormy...late afternoon...yesterday, and....

I had started out almost 7 years ago with a Siemens 4100 DSL Modem, which worked the way I needed it to for my home network, and I wasn't sure how easy it would be to find another like it. I was running it in the cross between router and bridge mode...so that my router could maintain my dyndns info (though it wasn't too long after that I moved that to ddclient on box, which has been more reliable...but I had ddclient scrape the IP from the router; scraping the router on my Cox connection wasn't supported by ddclient, so that one uses checkip.dyndns.org. So, now both do.)
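
(The checkip.dyndns.org variant is just ddclient's use=web mode....hostname and credentials below are placeholders:)

    # /etc/ddclient.conf (the Cox side)
    use=web, web=checkip.dyndns.org
    protocol=dyndns2
    server=members.dyndns.org
    login=mylogin
    password=mypassword
    myhost.dyndns.org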

It would probably be too much work to make ddclient go out on the right IP so that ip route will send it to the DSL router, so that it can query the DSL modem for the real external IP. Though the new cbox/dbox setup would simplify things....but the migration has stalled as I've been working on getting cacti moved from box...and it hasn't been going well. Lots of old templates and such don't work on the new one, so I've been reworking what I feel I can't live without....

That includes the graphs of my DSL modem stats....

Anyways....when the Siemens 4100 started dropping the connection a lot (around the 3 year mark) and changing the filter didn't help, I had heard that these things wear out... So, I tracked down a new Siemens 4100 on eBay...and switched to that....and that got things working again.... Then a couple years ago, things went bad consistently....though I could see from my cacti graphs that the SNR drops in the evening. I wasn't able to get local service to restore/fix things, so I tried the AT&T forum on dslreports.com, and they changed me to Interleaved, which helped....

But, I had started shopping around for a new DSL modem....somewhere in my journeys I acquired a Zoom ADSL X3 5760 Modem. But, since things were working...I put it aside as my spare for when things stopped. Seems I've had it so long that it's no longer available....got it July 9, 2012, according to Amazon.com.

For a while now, it would drop the connection now and then during the week (between its weekly self-reboots)...at first I suspected the router, since its twin had gone away in much the same way several months earlier. The router does also have failsafe configured, so if it can't talk to (ping) box or the WAN gateway...it reboots. Though at some point AT&T made their gateways unpingable, so it was pinging google.

But, on April 6 it got really bad....my IRC connection was resetting practically constantly. Since I had swapped the router before, I swapped it again. Though now I wonder if its watchdog was too aggressive. Things were usable, but the line drops were annoying. Also, the IP staying the same through the drops didn't make me question the DSL modem.

But, then on April 13, things started getting really bad....and I was getting 50+ messages a day from ddclient that my IP had changed. It seemed to stabilize a bit on Monday....though it was still dropping regularly enough that I switched to using Cox for my IRC screen session. I was going to defer the swap to the weekend.

Well, yesterday the weather was bad...lots of lightning, rain....and the first display I looked at when I got home said "NO INTERNET". Though it was probably a temporary outage, because it did appear to eventually come back while I was working on unboxing my 'new' DSL modem. And, trying to figure out how to set it up without the Windows wizard it provides, or much in the way of documentation...there was a small CD, which didn't really provide much depth....but I found what IP it would be at and that it has a web interface....it also has a telnet interface and an FTP interface.

Anyways...it turned out to be pretty straightforward getting it working...the hard part was figuring out what the non-default options meant, and whether I would want them....the main one I turned on was "fullcone NAT". And, I set my router in with a reserved IP and made it the DMZ host, so I can keep all my forwards there...plus the Zoom is limited to 16 forwards, which isn't enough....though this may change when I make use of its DMZ feature as well (doing reverse proxy on cbox/dbox to everywhere else on my home network...I'm running firewalls on these boxes already, to implement policy based routing). And, enabling ICMP on the WAN interface (it's also possible to enable http, ftp and telnet on the WAN interface as well).

Getting it working in Cacti again, turned out to be much harder.



07/24/11

  11:49:00 am, by The Dreamer   , 2214 words  
Categories: Digital Photography, Hardware, Software, Computer, DVDs / NetFlix, WiFi, Storage

Morning Orac outage

Well, Orac was being somewhat unresponsive this morning.... Looks like early this morning the system stopped answering to Cacti, with the temperature of the GPU rising to a new high. Don't know what the high would've been, because there was no snmp data for a while. GPU temp did recover, but the display did not.

Ended up rebooting; needed the excuse to see the external journal work....

Once back up, I switched back to an internal journal...but one of maximum size rather than the default size. And, then I shut down the system to extract the PCI Compact Flash card. During the reboot to see the external journal work...I realized that I wasn't likely to see much performance gain from getting faster compact flash, since the card itself was only UDMA/66. So the 133x CF is already faster than the card?

I probably should've fsck'd manually after the switch back....some discussion thread suggested doing this after fiddling with journal settings. But, it's fsck'ing now after the boot back up. Apparently the superblock had an invalid external journal superblock hint? And, the auto fsck failed, requiring a manual fsck.

The downside of an internal journal is that it can only be a max of 400MB (an improvement over the 128MB default I had before)...but that's only like 0.01% of my total space. And, there was a discussion thread that seemed to suggest a journal size of 0.2% would yield better improvement....I didn't do the whole 16GB CF card...though I wonder how much that would've helped.... But 0.2% would call for a journal of ~10.9GB. Suppose I could turn off data journaling, especially since there are files that backuppc puts down that are bigger than the 400MB journal. Though I think the ordered/writeback modes usually just do metadata journaling, while I had switched to data journaling when I was playing with external journals...and I didn't switch it back.

Wonder which of writeback or ordered would be more optimal for this FS? :??:
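
If I do revisit it, the knobs are all tune2fs....device name below is invented, and the filesystem should be unmounted first:

    tune2fs -O ^has_journal /dev/sdb1          # drop the current journal
    tune2fs -J size=400 /dev/sdb1              # recreate internal journal at the 400MB max
    tune2fs -o journal_data_ordered /dev/sdb1  # default mount option: metadata-only, ordered
    e2fsck -f /dev/sdb1                        # and the manual fsck this time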



04/14/11

  07:57:00 am, by The Dreamer   , 1339 words  
Categories: Hardware, Computer, Storage, Ubuntu

Worked on Orac last night

Back on March 28th, when I last wrote about working on Orac, I mentioned that I looked at the harddrive cage to see about the condition of the fan on there....only to find that there was no fan there.

Reviewing the manual on the Gateway website, I found that the cage is used in more than one model...and that some of those models have fans, while evidently mine did not. Browsing the parts lists for some of the other Gateway models that used the same drive cage, I found reference to a 60mm x 10mm fan, which I deduced was probably the fan that I would need for this location. I also found, from the Gateway manual for my model, that the motherboard has a front chassis fan connector.

So, after some thought, and checking amazon.com first, it struck me that eBay might be the better place to go. So I found a seller on eBay that explicitly said he shipped by USPS and bought one, and from another seller I got fan screws (a bunch of them, because I've needed them in the past and I'm sure I'll need them in the future....as it was, I didn't actually need them this time though).

Because I had recently built my new backuppc pool (I should be posting about that adventure some day), I had been waiting for a moment when Orac was idle again and not busy refilling the pool with full backups of everything....It hadn't gotten any fulls of Zen yet; during its recent attempt it didn't detect that Zen had gone away to apply the recent Microsoft patches, so I had to step in and stop it. So that seemed like a good time to take Orac down.


