Categories: "Software" or "b2evolution" or "BOINC" or "CFEngine" or "Quicken/TurboTax"


05/01/14

  07:00:00 pm, by The Dreamer   , 312 words  
Categories: Software, FreeBSD

gnome-keyring-daemon: couldn't allocate secure memory to keep passwords and or keys from being written to the disk

Keep seeing this annoying message on FreeBSD, even though back on December 20th, 2013....I had set "security.bsd.unprivileged_mlock=1" in /etc/sysctl.conf to try to finally address this problem.

The default RLIMIT_MEMLOCK resource limit is 64k, which I would think is more than sufficient.

So, it was time to research this problem in more depth.

Found that there's a DEBUG_SECURE_MEMORY define to see how much memory it's trying to allocate. It turns out it allocates in multiples of 16k blocks, which it later refers to as pages. 16k is, I seem to recall, the Windows page size; Solaris is 8k and most other systems are 4k (on my FreeBSD system, it's 4k). Well, it's only trying (and failing) to mlock 16k. So, I tried overriding the constant to 4k. But, that also failed.

I had skimmed the man page, where it says:

Since physical memory is a potentially scarce resource, processes are limited in how much they can lock down. A single process can mlock() the minimum of a system-wide ``wired pages'' limit vm.max_wired and the per-process RLIMIT_MEMLOCK resource limit.

If security.bsd.unprivileged_mlock is set to 0 these calls are only available to the super-user.

Well, on my system vm.max_wired defaults to 1323555 and RLIMIT_MEMLOCK (ulimit -l) is 64.....so limit is 64k, right?

Wrong...delving into the kernel source, I found that it first checks that the requested amount plus the amount the process already has locked doesn't exceed RLIMIT_MEMLOCK, and then that the requested amount plus the number of pages already wired system-wide ("vm.stats.vm.v_wire_count") is not greater than "vm.max_wired".

Well, when I looked at vm.stats.vm.v_wire_count, it was 2020311....it's already got more than vm.max_wired wired! 88|
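Easy enough to confirm from the shell (the values in the comments are the ones from this box):

    # per-process RLIMIT_MEMLOCK, in kbytes (64 here, i.e. 64k)
    ulimit -l
    # system-wide cap vs. pages already wired (1323555 vs. 2020311 here)
    sysctl vm.max_wired vm.stats.vm.v_wire_count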

I feel a PR coming on....

1323555 (which is about 5GB) is said to be 1/3 of some maximum. I have a 16GB system, probably not contiguous...and there's probably some amount reserved....but 2020311 is about 7.7GB.

I did a "sysctl vm.max_wired=2097152", and it took it (so put that into /etc/sysctl.conf, too.) and now gnome-keyring-daemon can start without that message.

09/29/13

  10:53:00 pm, by The Dreamer   , 656 words  
Categories: Software, b2evolution

Started to update skins, but upgraded to 5.0.6 first.

This weekend's project was to update the skins to 5.x.

From the early 0.8x days of this blog, I had settled on a customized version of the custom skin. Recustomizing it each upgrade was annoying, until I found that I could make my own version of it and it would likely work. Though if there were (bug/security) fixes, it was easier to find out what those were and apply them to my version of the skin.

So, I created an LKC skin for the blog.

This worked surprisingly well when I upgraded from 4.1.7 to 5.0.5 last weekend, in that I made no changes to any of its files and it pretty much worked. There was some breakage, which I later found was due to some reorganization of the global CSS files (which I could've fixed by copying the global CSS files from 4.1.7 down to the skin directory level). But it was easy enough to fix up some HTML tags in index.main.php and the "free html widgets". Plus I also removed some other widgets in the process, such as the Flash Tag Cloud and the Flash Twitter widgets (which I guess had been broken since the twimg.com incident anyway, and don't seem to be available anymore).

This single instance of b2evolution is also home to a couple other sites now. (I used to run separate instances, because of the heavily customized nature of the early days of this blog, but the work in maintaining them all was a pain, and since they're all with the same hosting provider...going multidomain seemed the better way to go, though it has its challenges.)

So after I updated 'LKC', all the code I had changed to get around the css problem needed to be changed back now that it wasn't a problem anymore. Well, it didn't have to be, but the HTML tags I used had been deprecated for quite some time, so it was kind of strange using them again to make things work for a while.

Then I turned to the other sites. First is the photoblog site, which is using the included photoblog skin directly...with minor tweaks. I should probably split that off someday. Only one file changed between 4.1.7 and 5.0.5, though I had pulled some files down from global into it to make some customizations. In 5, there's a back office means to do the same thing...so to update this skin, I removed those specific customizations and moved the information into the back office. In fact, I'm not sure what, if anything, I've changed for its current appearance. Though there are some things I think could be done better if I had some time to put into it.

Then the other site was using 'emerald', which was a 3rd-party skin. I mainly wanted something simple with 3 columns, with the level of customization that fit my desires at the time. It was originally released for 2.4, but somebody else had updated it for 3.0 or newer. And, while it suffered from similar problems to other old skins that I could work around, I had a desire to make it consistent with 5.x themes. I had checked the forums, and there was one post by somebody who was working on updating their theme, which had been based on this one, to fit 5.x. Though looking at their site, I wouldn't have known it was emerald.... and there weren't any details on what he had done to make it work with 5.x...or whether it was the same issues that I was having.

So, I looked around at other 3 column themes to try. Soon, I decided that I would just use 'evopress', an included theme...and make customizations to it. So I copied it into a different directory and changed _skin.class.php appropriately. And, then made some code changes, namely to post.main.php (some embedded javascript, with PHP wrapping to check window size) and then the bulk went into style.css.
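The mechanics of that are just copying the skin directory and adjusting _skin.class.php; roughly something like this (directory names are made up, and renaming the skin class to match the new folder is my reading of "appropriately"):

    # clone the bundled evopress skin into its own directory
    cd /path/to/b2evolution/skins
    cp -R evopress evopress_custom
    # then edit evopress_custom/_skin.class.php (skin class/name),
    # tweak post.main.php, and put the bulk of the changes in style.css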

Now it's late, and I have a road trip to UNMC tomorrow....

09/22/13

  08:40:00 pm, by The Dreamer   , 577 words  
Categories: Software, b2evolution

Finally got around to upgrading to b2evolution-5.0.5

Well, I got the upgrade from b2evolution 4.1.7 to 5.0.5 done today. There had been a few failed starts over the previous few weekends.

I had a plan on how I was going to do it, which was aided by the 3-way diffs between my site, the b2evolution-4.1.7 code and the b2evolution-5.0.5 code. Later I did a diff of just my site and the b2evolution-4.1.7 code.

It was easier to spot what I had done this way, since pretty much everything on the 5.0.5 side had changed, making it hard for the tool to show where my site differed from the 4.1.7 code.
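Something along these lines gives the same picture (the post doesn't say which diff tool I used; plain diff and made-up directory names here):

    # just my customizations: live site vs. pristine 4.1.7
    diff -ru b2evolution-4.1.7/ mysite/ > my-changes-vs-4.1.7.diff
    # then walk that diff and re-apply each change by hand
    # against the unpacked 5.0.5 tree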

I did find that there was some cruft from previous updates, or files that weren't part of the diffs. Perhaps the diffs only contained files that had changed between point releases, and omitted files that were new. Or the diffs and releases differed on how they handled reorgs. Hmm....

Anyways...in the end it was a matter of finding what customizations I had done and applying those changes to the 5.0.5 code. Though I later found that there is now a place in the 5.0.5 code to insert custom data instead of editing _html_header.inc.php and _body_footer.inc.php. Wonder if I'll go back and try that. Currently, that only affects one skin. The other skins I use, I made copies of, so I may need to see if they need to be brought up to 5.x. One of the custom skins is based on one that comes with b2evolution, but I've changed it so heavily that it was kind of painful patching it as part of every upgrade....until I went with making it separate. Don't know why I didn't do that with all of them. Though the other skin I may or may not need to update is not one that comes with b2evolution, so it may or may not have been updated for 5.x. Especially since the current version is for 3.x.

Kind of frustrating thing with b2evolution....the lack of current 3rdparty skins and plugins for it.


08/04/13

  06:03:00 pm, by The Dreamer   , 976 words  
Categories: FreeBSD, CFEngine

Wonder if this will work - package_method=>freebsd_portmaster and /var/db/pkg

On my FreeBSD system, my apache webserver would get angry whenever I updated the php & extensions ports, requiring a bunch of other operations after the 'portmaster -a'.

Since I've been playing around with CFEngine 3, I had started to add to my "bundle agent apache", to do more than just promise config files current, process running, reloads, etc.

So, one of the first problems I had run into on FreeBSD is that there are certain extensions that need to be in a particular order in '/usr/local/etc/php/extensions.ini'. Which is solved by using fixphpextorder.sh.

Well, fortunately when this script is run, it leaves behind a backup file, 'extensions.ini.old', which is the same age or newer than 'extensions.ini'.

CFEngine3 can take care of it this way:

Code

    vars:
        # where the php extensions config and the fix-up script live
        "ext_dir"       string => "/usr/local/etc/php";
        "ext_file"      string => "extensions.ini";
        "fix_php_ext"   string => "/usr/local/etc/fixphpextorder.sh";

    classes:
        # extensions.ini has been rewritten since the last fix-up run
        "need_fix_php" expression => isnewerthan("$(ext_dir)/$(ext_file)","$(ext_dir)/$(ext_file).old");

    commands:
        need_fix_php::
            # reorder the extensions, then bounce apache gracefully
            "$(fix_php_ext)"    contain => in_dir("$(ext_dir)");
            "$(g.lrc_d)/$(g.apache) graceful";

g.apache is "apache22" currently on FreeBSD, and "apache2" on Ubuntu. Someday it might become "apache24" on FreeBSD.

Since I did FreeBSD first, and I'm still working on getting one of my 4 (or fewer) Ubuntu machines rolled in, I have:

g.rc_d as "/etc/rc.d" and g.lrc_d as "/usr/local/etc/rc.d" for FreeBSD. They are both set to "/etc/init.d" for Ubuntu. I also have a g.init_d for Ubuntu, but not FreeBSD. Not sure which I'll use where....I suppose if its an OS specific case, g.init_d would get used and if its not...then which ever one is the correct one for FreeBSD will get used.


  12:25:00 pm, by The Dreamer   , 633 words  
Categories: Software, Computer, Networking

Apparently my dd-wrt does loopback now

A couple months ago I asked if mosh could be made to work if the mosh-server IP changes when roaming between networks.

Years ago, I used to have routers that did 'loopback', but haven't had ones capable of it for some time...or so I thought. Though I hadn't really had a major need for it. Except perhaps for mosh.

mosh, the MObile SHell, is an ssh replacement that supports roaming and intermittent connectivity. Since I do my IRC using irssi in screen, running all the time on a server at home, this makes staying connected to IRC on my laptop much nicer. I can close my laptop, and later open it and it'll still be connected to my screen session.
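From the laptop side, the setup amounts to something like this (hostname and screen session name are placeholders):

    # reattach to the always-running irssi screen session over mosh
    mosh dreamer@homeserver -- screen -dr irc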

The problem was when I came home, I'd be unable to recover the connection correctly and the client goes into an unrecoverable state, so that even if I later use my laptop on an outside network the mosh session won't resume.

But, today I opened my laptop (and I just realized that I didn't do what I had intended to do) and I just minimized the window out of the way...even though it probably wouldn't recover on Monday at work. But, the dock icon showed that something wanted my attention....probably mosh-client giving up? No. Well, my nick had come up a couple of times yesterday, but it shouldn't have known that....but not really thinking, I switch to the channel. And, it does. I switch around and it's working. Wait...it shouldn't be though! :!:

So what changed? I do a tcpdump and see that it is connecting to my WAN IP and getting responses from my WAN IP....'loopback' never worked for me though....

:idea: Perhaps it's 'loopback' of port forwards that has never worked....

I had moved irssi from box to dbox a while back. The router has two port forwards related to this set up to box: a single-port TCP forward and a port-range UDP forward.

But, because my other router is running stock firmware, it has a limited number of port forwards...so as I was migrating services to cbox (and using nginx to reverse proxy web services on other systems on my home network, where those that use a webserver are using apache, including local services...such as cacti on cbox and nagios on dbox), I decided that I would just make cbox the DMZ host and start running host-based firewalls at home, especially on this host (it also uses an IP alias...kind of like how we do hosts behind the BigIP at work ;D )

So that means no port forward(s) on my dd-wrt router for WAN to dbox....so I guess the NAT allows 'loopback'ing in this case.

Wonder if the same applies to my other router.

The only problem this causes is that I had plans to replace routers. I actually have a new router to replace the current stock router....though I haven't got anything that really needs the speed upgrade to 802.11ac yet in the room where I'm using wireless bridging. I also had plans to replace my dd-wrt router, which had started getting unreliable, as they seem to do after a while....though it seems to have helped after I deleted old traffic data....


07/29/13

  09:06:00 pm, by The Dreamer   , 729 words  
Categories: Operating Systems, FreeBSD, CFEngine

The risk of high uptimes....

There are Unix servers at work that have uptimes of >1000 days, there are even servers with uptimes of >2000 days, in fact there are servers that have now exceeded 2500 days (I'm looking at one with 2562+ days.)

On one hand, there are SAs that see this as a badge of honor or something, to have had a system stay up this long. OTOH, it's a source of great dread.

A while back this system was having problems....it's Solaris, and somebody had filled up /tmp....fortunately, I was able to clean things up and recover before another SA resorted to hard rebooting it.

The problem with these long-running servers, especially in an ever-changing, multi-admin shop, is that you can't be sure that the system will come back up correctly after a reboot.

We've lost a few systems at work due to a reboot. Some significant ones for reasons as simple as replacing a root disk under vxvm and forgetting to update the Sun partition table, or a zpool upgrade and forgetting to reinstall the boot blocks. To more significant ones, where a former SA had temporarily changed the purpose of an existing system all by command line, running it out of /tmp...so that after it had been up for 3+ years and he'd been gone over a year....patching and rebooting made it disappear.... The hardware that the system was supposed to be on needed repair, but he had never gotten around to it.

It'll be interesting to see what happens should the system ever get rebooted.

:?: So, what brought this post on?


07/28/13

  01:14:00 pm, by The Dreamer   , 3144 words  
Categories: Software, Operating Systems, Ubuntu, FreeBSD, CFEngine

Last two weekends - nagios and more cfengine 2 & 3

So, what started as "take a week to set up a new nagios server at work" ended up taking almost a month...because there were many days where I'd only have an hour or less to put into the side task. The other stumbling block was that I had decided the new nagios server's configuration files would be managed under subversion, instead of RCS as had been done in the previous two incarnations. New SAs don't seem to understand RCS and that the file is read-only for a reason...and it's not to make them use :w! ... which lately has resulted in the sudden reappearance of monitors for systems that had been shut down long ago.

Though now that I think of it, there used to be the documented procedure for editing zone files (back when it was done directly on the master nameserver and version controlled by RCS.) Which as I recall was to perform an rcsdiff, and then use the appropriate workflow to edit the zone file.

    #!/bin/sh
    # the old documented zone-file editing workflow, as a runnable sketch
    zonefile=$1

    if ! rcsdiff -q "$zonefile" >/dev/null 2>&1; then
        # differences: somebody edited without checking out first
        rcs -l "$zonefile"
        ci -l -m'rude comment that somebody made edits outside RCS' "$zonefile"
        vi "$zonefile"
        ci -u "$zonefile"
    else
        co -l "$zonefile"
        vi "$zonefile"
        ci -u "$zonefile"
    fi

But, when I took over managing DNS servers, I switched to having cfengine manage them and the zone files now live under masterfiles, so version control is now done using subversion. Had started butchering the DNS section in the wiki, probably should see about writing something up on all the not so simple things I've done to DNS since taking it over...like split, stealth, sed processing of master zone for different views, DNSSEC, the incomplete work to allow outside secondary to take over as master should we ever get a DR site, and other gotchas, like consistent naming of slave zone files now that they are binary.

Additionally, work on the nagios at work was hampered by the fact that provisioning for Solaris and legacy systems is CF2, and the new chef-based provisioning is still a work in progress...where I haven't had time to get into any of it yet. So, I had to recreate my CF3 promises for nagios in CF2.

But Friday before last weekend it finally reached the point where it was ready to go live. Though I've been rolling in other wishlist items and smashing bugs in its configuration, and still need to decide what the actual procedure will be for delegating sections of nagios to other groups.

One of the things I had done with the new nagios at work was set up PNP4Nagios...as I had done at home. And, while looking to see if I needed to apply performance tweaks to the work nagios, all the pointers were to have mrtg or cacti collect and plot data from nagiostats. Well, a new work cacti is probably not going to happen anytime soon, and the old cacti(s) are struggling to monitor what they have now. (I spent some time a while back trying to tune one of them...but it's probably partly hampered by the fact that its mysql can use double the memory that is allocated to the VM, though reducing it from running 2 spines of 200 threads each on the 2-CPU VM to a single spine with fewer threads has helped.) Something like the boost plugin would probably help in this case, but the version of cacti is pre-PIA. And it could be a long time before it gets replaced (not sure if an upgrade is even possible....) Our old cacti is running on a Dell poweredge server that has been out of service for over 6 years... with the cacti instance over 8 years old (Jul 8, 2005)....and the OS is RHEL3.

Anyways, it occurred to me that there should be a way to get PNP4Nagios to generate the graphs, and I searched around and found check_nagiostats. Though there's no template for it. Oh, but there is a nagiostats.php template; if I create a link to it named check_nagiostats.php, it should get me 'better' graphs. Which is what I now have CF2 do at work.
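The link itself is just a symlink in the PNP4Nagios templates directory, something like this (the path is a guess; it varies by install):

    # let the check_nagiostats command reuse the stock nagiostats template
    cd /usr/local/pnp4nagios/share/templates
    ln -s nagiostats.php check_nagiostats.php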



07/14/13

  05:58:00 pm, by The Dreamer   , 2452 words  
Categories: Software, Ubuntu, FreeBSD, CFEngine

Moving irssi

So, recently there was a 'long' 4th of July weekend....on account that I opted to take Friday (the 5th) off as well.

I kind of thought I would tackle a bunch of different projects this weekend, though I've pretty much shelved the idea of re-IP'ing my home network. Perhaps something to do when I get my configuration management better fleshed out.

What I decided was that it looked like there was just one last thing left on one of the two Ubuntu servers that I'm retiring. So, I figured I'd quickly move that and then go on to the next thing. In the end, I didn't get it completed until Monday night.

For background, some years back...after my return to IRC, I had initially gone with Chatzilla (being that Firefox was my standard browser), which later moved to xulrunner and Chatzilla so it was independent of my browser. Though it was kind of annoying having it running at work and at home, and somewhat confusing for co-workers that ran text based IRC clients in screen somewhere and ssh'd in, etc. Most people that did this, were doing irssi.

So, I initially built it from source and ran it on my old RedHat 7.3 server, and that was usable. Later, when I set up an Ubuntu box to replace that server (the hardware had previously been SuSE....acting as an internal router for ivs status tracking....), it evolved, in that I would start screen detached from rc.local....which was important since the system would see patches on a regular basis, requiring reboots....which is kind of a reason for switching to FreeBSD.
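Starting the detached screen from rc.local looks roughly like this (user and session name are placeholders):

    # in /etc/rc.local: start a detached screen running irssi as my user
    su - dreamer -c '/usr/bin/screen -dmS irc irssi'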

Over time, I would make little tweaks here and there, to this irssi setup. Like twirssi, doing ssl, and later bitlbee to integrate Facebook chat (came across some stuff that I should add now...)

And, incorporating other tweaks I come across online when there's some problem that becomes sufficiently bothersome that I want to address it. The one problem I haven't been able to solve is keeping server/system messages confined to the one window. Namely, keeping system CRAP going to the system window, while allowing channel CRAP to show up in the channel windows....but instead I'll get system CRAP in whatever channel window is active. Which is annoying because it's usually the work channel, where it should be just signal and no noise.

Anyways...

I had started to move things more than a month ago, in that I built irssi and bitlbee (including the cfengine3 promise for it...not really much config-wise for cfengine to manage for irssi...though I envisioned promising that it's running all the time, though irssi has generally been stable everywhere else that I've run it).

But then I got distracted by other cfengine3 work. Even though things started to get pressing when twirssi stopped working, due to Twitter API 1.0 going away...so I had to update Net::Twitter and twirssi. Updating twirssi wasn't that hard to do, but Net::Twitter was a problem, so I opted to remove it and its dependencies and then install it and its dependencies using CPAN.

I also made a note to install net/p5-Net-Twitter from ports on dbox.
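On the FreeBSD side that's just the usual ports routine, e.g. (portmaster being what I already use elsewhere):

    # install Net::Twitter from ports on dbox
    portmaster net/p5-Net-Twitter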

twirssi seems to be having other issues, which I had intended to investigate...perhaps after I move... But, that was like a month ago....



07/06/13

  09:32:00 am, by The Dreamer   , 659 words  
Categories: Software, Ubuntu, FreeBSD

Ubuntu squid with SSL

Link: http://lawrencechen.net/ddclient-aamp-squid

This is an update to the "ddclient & squid" here

Ran into a new problem recently....though the need for SSL in squid on Ubuntu is going away, given that I'm slowly replacing this server with a FreeBSD server.

As a result, I don't pay attention to this Ubuntu server as much as I used to, so I've configured unattended-upgrade. It was installed, but it didn't seem to do anything: on other servers I'd log in to find lots (40+) of patches available, more than half of them security. But since then I've come across how to configure it to do more than just security patches, including sending me email and, on some systems, automatically rebooting when necessary. (Should've thought to look at how unattended-upgrade is configured to do such things in the Ubuntu AMI I have in AWS.)

Since I got unattended-upgrade configured on this old server (32-bit Ubuntu Server, which I've heard they now have a 12.04LTS download for??? They had said they dropped 32-bit server support, so there wasn't a version for me with 10.04LTS. So I couldn't upgrade and now I'm way past EOL, which is causing problems...probably need to hunt down the landscape and ubuntuone services and nuke them, instead of letting them degrade my server for being EOL.), I've also had to update packages on here from outside sources to keep things running, so I guess I should work harder on abandoning this server.... Where it'll likely get reborn as [yet ]a[nother] FreeBSD server....along with the server that I think I have all the parts collected for, and just need to sit down and put together. It started as a mostly functional pulled 1U server, in need of ... well, either new fans or a new case.... I opted for the new case route. It also needed drives and memory. But, as a result of the new case route...aside from case/power supply...it meant I would need to get heatsinks...since the passive ones, which relied on the 1U case channeling air flow, would be hard to recreate in the tower case I went with. It's a huge tower case, given that it's an E-ATX motherboard...yet it isn't a full tower (like the formerly Windows machine called TARDIS...someday I'll work on its regeneration....need money to buy all the bits and pieces that'll make that up, which I haven't fully worked out...or where it'll go, since my dual 23" widescreen FreeBSD desktop has consumed all of the desk that it would've shared....and I'm not really keen on the idea of a KVM for this situation. :hmm: )

Anyways...every day I get an email from unattended-upgrade for this system.... with:

Unattended upgrade returned: True

Packages that are upgraded:
 squid-common 
Packages with upgradable origin but kept back:
 squid squid-cgi 

Package installation log:


Unattended-upgrades log:
Initial blacklisted packages: 
Starting unattended upgrades script
Allowed origins are: ["['Ubuntu', 'heron-security']", "['Ubuntu', 'heron-updates']"]
package 'squid' upgradable but fails to be marked for upgrade (E:Unable to correct problems, you have held broken packages.)
Packages that are upgraded: squid-common
Writing dpkg log to '/var/log/unattended-upgrades/unattended-upgrades-dpkg_2013-07-06_08:05:42.056193.log'
All upgrades installed

This is because of that quirk where even though I rebuilt my version with SSL, and kept it the same version...it wants to install its version to replace mine (of the same version). Which is why I did the hold thing.

I could do the alternative of adding a string to make my version sort ahead of the current one....though I suppose I won't unhold...so that unattended-upgrade won't upgrade it should such a thing appear (unlikely, since both the OS and squid are ancient...and there'll be no more updates.) But, the intent is to hopefully silence unattended-upgrade in this matter.
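For reference, the hold itself is the usual dpkg-selections sort of thing (I don't have the exact command in front of me; this is one common way to do it):

    # keep the locally rebuilt (SSL-enabled) squid packages from being replaced
    echo "squid hold" | dpkg --set-selections
    echo "squid-cgi hold" | dpkg --set-selections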

Though I'm kind of surprised it's still doing something....hmmm, guess there was a new security patch to squid 2.7 back on January 29, 2013....that I've been missing (suppose it had already downloaded the update into its 'cache'....or the backend is still there, it's just not getting updates beyond what's there....whatever, I think I'm down to one more service to move off....)

07/04/13

  05:59:00 pm, by The Dreamer   , 476 words  
Categories: Software, CFEngine

Upgrading from CFEngine 2 to CFEngine 3

I just learned of a key missing detail that would probably have helped lots of other CFEngine 2 sites make the transition to becoming CFEngine 3 sites.

All the sites, including CFEngine's, have docs about upgrading from CFEngine 2 to 3....

Where they touch on, or go in-depth on, conversion of policies from 2 to 3, extol how 3 is better than 2, and then offer vague options on how to upgrade (either in-place or replace)....

The most detailed explanation was a slide deck...which wasn't detailed enough.... that says "CF2 and CF3 designed to be interoperable", "Replace CF2 policies at your pace". How?

In-Place Upgrade

"Replace cfexecd with CFEngine 3's cf-execd" - Access controls remains untouched, runs cf-agent.

"Sample input files contain integration promises" - Launched automatically, Changes crontab

And then it gets into the steps:

  • Install CFEngine 3
  • Copy new inputs files to CF2 master repository
  • Remove any rules to reinstall CF2 or add cfexecd or cfagent to crontabs
  • Remove cfexecd from start up
  • Edit update.cf
  • set email options for executor in promises.cf
  • cf-agent --bootstrap

If all went well, you are now running CFEngine 3.

Bootstrap policy server using:

cf-agent --bootstrap --policy-server

  • Remove all rules and policies that are capable of activating CFEngine 2 components
  • Convert cfservd.conf into a server bundle
  • Place a reference to this in promises.cf
  • Add converted CFEngine 2 policies or create new CFEngine 3 policies
  • Done???? :??:

Something's missing....where's this interoperability taking place? Does CF3 know how to run CF2 policies? No... where's this "replace CF2 with CF3 at my pace"? Reads like it's a full in-place replacement of CF2 with CF3....

So I finally made a reference about this on a list...

Answer?!

It's why the CF3 binaries have dashes in the name. So you can drop them into the CF2 working directory.... The trick is editing the exec_command in the executor configuration, that's the command for running the agent; modify it to run both agents (v2 and v3).
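In practice, that exec_command ends up being a little shell one-liner along these lines (binary paths assumed, not from the original answer):

    # run the CFEngine 3 agent, then the old CFEngine 2 agent, from one cf-execd
    /var/cfengine/bin/cf-agent -f /var/cfengine/inputs/promises.cf ; /var/cfengine/bin/cfagent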

Wow...that's kind of an important detail that's been missing!

