This is another one of those items that is really just for myself, because it will certainly come up again…
I have a Synology NAS I use for backups, and once in a while you end up with a filesystem error that it can’t fix, so the recommended solution is to copy everything off the backup volume, reformat the drives/volume and copy everything back (which I’ve done a couple of times over the years). Most recently, one of the hard drives failed in the RAID (no biggie, popped in a new one and it rebuilt itself as expected), but I also ended up with some filesystem errors for whatever reason. I decided this time I was going to figure out how to fix it without copying everything off and back on to it.
It’s currently running DSM 7.1.1, so what I did may or may not work for other versions (and I should also point out that this post is intended as a reminder to myself, I don’t recommend anyone doing it to their own system).
Step 1: Enable Telnet on the NAS (you can’t do this over SSH because of how some processes work, not going to go into the details here).
Step 2: Telnet in and shut down PostgreSQL via sudo systemctl stop pgsql (otherwise the PostgreSQL service, which automatically restarts, prevents the volume from being unmounted).
Step 3: Unmount the volume: sudo umount -f -k /volume1
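From there, the actual repair is just a plain fsck against the block device backing /volume1. This part is a sketch rather than a transcript (it assumes an ext4 volume, and the device path is hypothetical… check the output of mount before you unmount to see what yours really is):

mount | grep volume1
# hypothetical device path; substitute whatever the mount output showed
sudo fsck.ext4 -pvf /dev/vg1000/lv

Once it comes back clean, remount the volume (or just reboot the NAS and let it sort itself out).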
This is one of those things that I *probably* won’t ever need to do again, but just in case I do, I can come back here to remind myself rather than figuring it out all again.
I have a use case where I need to have a Monero view-only wallet, but spending is only possible with a hardware wallet. Normally a hardware wallet won’t tell you your private view key (needed for a view-only wallet) because it tries to save you from yourself and “protect your privacy”.
In my case, I need an automated process to generate new Monero addresses so we know which customer sent funds (the deposit addresses are unique to each customer and we are able to cross-reference the address to the customer on our end). I also want to protect the private keys for the wallet so in a worst-case-scenario where the server was compromised, a hacker couldn’t steal the funds in the wallet.
A standard hardware wallet can’t generate addresses without a human because you need to physically confirm to export the private view key with the hardware wallet (a Ledger in my case). I’m not particularly worried about privacy issues if the server was hacked and the private view key were compromised (it’s for legitimate business purposes and taxes are paid, and if the server was hacked someone seeing how much Monero came in from customers is the least of my concerns).
A few hours of sifting through source code for Ledger’s Monero App, and I was able to reverse engineer the command wallets use to export the private view key (normally kept in memory and not disclosed to users).
TL;DR
I was able to keep the private keys solely in the hardware wallet for spending (always done with a human and the physical wallet), but also able to programmatically generate deposit addresses on the fly without a human by using Ledger’s library for interacting with Ledger devices, via the following shell command:
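Something along these lines, using Ledger’s ledgerblue Python library to push a raw APDU at the device (the hex bytes below are a placeholder… the real CLA/INS/P1/P2 values are the ones reverse engineered from the Monero app source):

# placeholder APDU bytes; substitute the real command bytes from the Monero app source
echo '0020020000' | python3 -m ledgerblue.runScript --apdu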
That command triggers a prompt on your hardware device to export your view key. If you confirm it, it outputs your private view key with 4 extra characters appended to it (I don’t care enough to figure out what the purpose of those last 4 characters is; easy enough to just ignore them).
Now you can generate a view-only wallet using that private view key like so:
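Something like this with monero-wallet-cli (the wallet filename is arbitrary):

monero-wallet-cli --generate-from-view-key view-only-wallet

It prompts for the wallet’s standard (public) address, the secret view key (minus those 4 extra characters) and a password for the new wallet file.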
I have two XServe G5 servers that I haven’t used as web/database servers in a long time, but I do use them for other stuff (backup destination, historical monitoring of other hardware, etc.)
I haven’t had any problems with them in the 11 years that I’ve had them, except now both died within a week of each other (specifically, both power supplies died). Which I guess says something about manufacturing consistency to have both die at the same time. 🙂
I had a new spare because way back in the day I bought the XServe service parts kit (included spare motherboard, power supply, fans, etc.), so getting one of them back online was no big deal. I searched around to buy another one, and the power supplies are like $400. Yeah, no thanks… not for a machine that probably isn’t even worth that much as a whole.
So instead of spending $400, let’s see how nerdy I can be and just replace all the capacitors in the power supply for closer to $10 in parts and see what happens… You could actually get the parts for $1-2 if you are okay with getting junky capacitors, but probably a good idea to replace capacitors in a server power supply with good ones. 🙂
The place I got my capacitors didn’t have the 1uF or 10uF radial ones I needed, so I ended up getting a tantalum capacitor for the 1uF one and then a radial 10uF 50V instead of the 25V (you can use a higher voltage rating as long as the capacitance is the same).
First off, let me say that I love CloudFlare… and coming from me, that probably means something because there aren’t a lot of third party services that I think are great. But CloudFlare is one of those.
That being said, I still have a list of things I wish CloudFlare did (or did differently):
Have a “failover host” option for individual DNS records. For example route to host X, but if host X is down, route to host Y. Yes, I know you can do DNS management with the CloudFlare API, and I built a system that monitors servers and switches them if needed via the DNS API (see the sketch after this list). It just would be simpler if we had a “failover host” option.
Allow wildcard domain records to route through CloudFlare. This would be way more convenient.
Make the Authy two-factor code last longer (like 30 days for that computer?). It’s obnoxious that you have to generate a new two-factor code while being on the same computer every other day.
Geotargeting granularity. It would be nice if CloudFlare could geotarget more than the country… like long/lat/city level would be nice.
WebSockets support. Yes, I know it supports WebSockets for Enterprise users… and while I’m on a paid plan, I’m not on the Enterprise tier. Update: from the comments on this blog post, it looks like it might be coming. Yay!
Prepayment. For paid plans, I don’t know what happens if your monthly payment doesn’t go through for some reason, but I don’t want to find out. It would be nice if you could just be like, “I want to prepay for the next year so there’s no service interruption”.
Use HTTP/2 To Origin. Cloudflare doesn’t use SPDY or HTTP/2 to the origin server even when available. See: this tweet
Sync Data Centers. Cloudflare data center caching is great, but as more and more data centers come online, the benefit diminishes. Right now there are 74 Cloudflare data centers, which means a resource is requested 74 times (once per data center) for caching.
Email routing. It would be nice if you didn’t have to expose your server’s true IP addresses when sending an email (or just from your SPF records in your DNS). Having a service that lets your mail servers relay email through and erase the originating IP from the email header in the process would be super fantastic. Probably would be problematic because of potential spam implications, but it sure would be nice to have truly hidden server IPs without needing to get separate servers in a separate location for email.
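As for the failover sketch promised above… when the monitor decides host X is down, the DNS API side amounts to one call. This is roughed out against the current v4 API (the zone/record IDs and token are placeholders, and my actual system predates this exact API version):

curl -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
  -H "Authorization: Bearer $API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"type":"A","name":"www.example.com","content":"203.0.113.2","proxied":true}'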
CloudFlare is super rad and if you own a website without using it (even their free plan), you are doing it wrong. 🙂
My dryer died recently, so it was time to buy a new washer and dryer set. I ended up getting the new Whirlpool “smart” washer and dryer with their “6th Sense Live” or whatever you want to call it. Basically they allow you to manage and view energy usage, have a companion mobile app, allow you to schedule the best times to run (based on electricity rates in your area), etc. For reference, the washer is model WFL98HEBU, and the dryer is model WEL98HEBU.
The electricity saving stuff wasn’t a huge deal for me since I’m on solar and generate more electricity than I use. But being a stats nerd and a nerd in general, the other stuff sounded straight pimpin’.
Got them installed without any issues… fired them up, and after poking around in the menus, I couldn’t find anywhere you could configure its network connectivity (which I know they have/need). I even opened up the manuals (lol, wut???), and there’s not a single word or mention of how to get these things on the network. If you don’t believe me, you can look at the manual online over here.
There was no “separate manual”, no nothing for connecting these things.
Finally, after mucking with the interface to no avail for about an hour, I opened up the door to get the model number so I could Google about how to set it up. Also nothing… WTF? Is this thing a scam? Does it not really even have connectivity?
Lo and behold, when opening the door, I noticed something… a sticker with some info… the MAC address, SSID and SAID. So I’m thinking to myself, “Why does this thing have an SSID? Does it have a wifi base station?” Sure enough, after grabbing my iPhone and looking for wifi networks, there were 2 new wifi networks near me… one for the washer and one for the dryer. If you try to log into them, they require a password… which just happens to be the SAID on that sticker. Open up a web browser and you are given nothing but an option to connect to the real wifi network (and you can enter a password for it). Rinse and repeat for the dryer, and we are online! Sure would have been nice for them to mention that in the manuals.
Here’s the short version of this story… if you don’t know how to get your WFL98HEBU washer or WEL98HEBU dryer connected to the Internet, they have their own wifi networks you need to log into in order to configure their connectivity.
That being said, it’s a pretty cool washer/dryer pair, even if the manual sucks.
So I had an issue where ndb_restore was utterly failing, making it impossible to restore ndbcluster tables from a backup. Trying to use it flooded us with all sorts of sporadic errors…
[NdbApi] ERROR — Sending TCROLLBACKREQ with Bad flag
—————
theNoOfSentTransactions = 1 theListState = 0 theTransArrayIndex = 5
—————
Temporary error: 410: REDO log files overloaded (decrease TimeBetweenLocalCheckpoints or increase NoOfFragmentLogFiles)
—————
Temporary error: 266: Time-out in NDB, probably caused by deadlock
—————
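For the REDO log one, the error text itself names the knobs to turn. In config.ini terms that tuning looks something like this (values here are illustrative, not necessarily what I settled on):

[ndbd default]
# more redo log files so checkpointing can keep up during the restore
NoOfFragmentLogFiles=64
# checkpoint more often (this is a log2-scaled setting; lower = more frequent)
TimeBetweenLocalCheckpoints=18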
Finally was able to work it out, but man, can I just say that ndb_restore really could use an overhaul to make it a little more friendly? Okay, go! 🙂
So I’ve been trying to find good LED replacements for the incandescent candelabra bulbs I have at my house (422 of them).
Last year the Philips 2700K was the best I could find, but I still hated it. It’s ugly and gives off a weird purplish light… yeah, no thanks.
Earlier this year I tried one of the Archipelago 2700K LEDs… Eh… Kind of greenish light.
On top of it, I wanted something warmer than 2700K, which seemed impossible to find.
Finally I called China… Straight to an LED manufacturer, and they were able to send samples at 3 different color temperatures… 2100K, 2300K and 2500K.
And guess what? The 2300K is perfect… I finally found the perfect LED candelabra. Good color temp, the bulb itself isn’t ugly, dims just fine, etc… Needless to say, I ordered 422 of them direct from the manufacturer and they are about half the cost of anything else to boot.
40 watt bulbs going down to 3 watts as soon as the order is delivered. 🙂
This is mostly a reminder to myself… but maybe someone else will find it useful as well.
I migrated some databases to ndbcluster (some of the tables were fairly decent sized… 9GB for 1 table spanning 220M records), and was running into a problem where an ALTER TABLE to change the storage engine was spewing out some cryptic error message like so:
Cryptic mainly because it was coming from MyISAM, which doesn’t have record locking. Long story short is it’s not a setting in your my.ini file for mysqld, rather a setting in your config.ini for ndbd. The TransactionDeadlockDetectionTimeout setting defaults to 1200 (1.2 seconds), I ended up raising it to 24 hours just for purposes of migrating existing tables to ndbcluster (the setting of 86400000 ms is 24 hours).
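In config.ini terms, that’s just this under the data node defaults:

[ndbd default]
TransactionDeadlockDetectionTimeout=86400000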
Being relatively new to MySQL Cluster, it was also a good opportunity to practice making a config change and doing a rolling restart of the ndbd nodes to have no downtime.
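For reference, the rolling restart itself is nothing fancy… from the management client, you restart one data node at a time and wait for it to rejoin before moving on (node IDs here are just examples):

ndb_mgm -e "2 RESTART"
ndb_mgm -e "SHOW"    # wait until node 2 shows as started, then repeat for node 3, etc.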
So this has been driving me mad since I installed Mac OS X 10.7… any time Cacti tried to output a graph, it would work, but it was painfully slow (like 30 seconds)… prior to installing 10.7, it was a fraction of a second.
Anyway… long story short is apparently RRDtool pulls the list of fonts from the system, and since it was running as the _www user, it was unable to write the font cache to disk to make future font access fast.
Logging in as root and running the “fc-list” command to get a list of installed fonts fixed the problem. The first time it ran it took 30 seconds (imagine that! heh), and subsequent times it was instant. And Cacti graphs are back to being instant.
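In other words, the entire fix boiled down to one command run once:

sudo fc-list > /dev/null    # first run writes the font cache; everything afterward is instant

Hopefully this helps someone…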
Seagate is still not producing the hard drives I want for new servers in sufficient quantity, but they do seem to be actually producing them now. I’ve seen places where I can get 30 of them lately… but I need 72.
I’m assuming I’ll be able to get them relatively soon and have started to think about how I want to lay the servers out architecturally.
As far as hardware goes, gigabit Ethernet just really isn’t that fast these days… real-world use gets you about 80MB/sec (the theoretical maximum is about 110MB/sec). When you have a cluster of servers passing massive amounts of data amongst themselves, 80MB/sec just doesn’t cut it. Which is why the servers have Infiniband QDR (40Gbit interconnects), so they should be able to pass 3.5GB/sec (ish) between themselves.
For software/services, I’m thinking maybe 12 identical servers that are more or less service agnostic… with each server being a web server, MySQL Cluster data node and a MySQL Cluster SQL node. Then each server could probably be set up to handle 10,000 concurrent HTTP connections as well as 10,000 DB connections (1 per HTTP connection). With 12 servers, you would have the capacity for 120k concurrent web connections and 120k DB connections capable of doing millions of SQL queries/sec. If you need more capacity for anything, you could just bring additional agnostic servers online.
This of course is just in theory… who knows how it will test out in actual use, but it does seem like something worth testing at least.
I had a switch that “lost” its operating system and was stuck in a reboot loop. Long story short is I needed to upload the OS via a direct serial connection to fix it. I didn’t have a Windows machine available, so I used my Mac and a Keyspan USA-19 (USB -> DB9) thingie (heh). This is mostly a note for myself in case I ever need to do it again and forget.
Things that did not work…
ZTerm let me get to the console just fine, but uploading the firmware with XModem failed for some reason (and you only found out it failed after a 3.5 hour upload time).
Keyspan device on Windows via Parallels (from all the Googling I did, apparently it’s just a known issue with that piece of hardware and Parallels, and the drivers will not recognize it).
Virtual serial port in Windows/Parallels via an app called SerialClient (it was very difficult to find this app since the site that made it doesn’t exist anymore… people running Parallels said it worked for them, but that was many years ago… so maybe it was an older version of Mac OS X. Either way, I couldn’t get the app to “connect”).
What DID work (thank God because I was running out of options)…
Download/compile/install lrzsz via ./configure;make;make install.
From Terminal: screen /dev/tty.Keyserial1 57600
Hit CONTROL+A
Type: :exec !! lsx -b -X ~/firmware_image.ros
It was infinitely faster than ZTerm (about 15 minutes vs. 3.5 hours), but more importantly it actually WORKED.
Sadly, 12GB in a server just isn’t what it used to be, and the database servers are really starting to feel strained as they beg for more memory. Handling ~25GB of databases on servers with 12GB is just no bueno. I thought of upgrading the RAM in the blades, but the maximum RAM the BIOS supports is 16GB, and all the DIMM slots are in use… so it more or less would mean tossing all the existing RAM and buying 160GB of new RAM. In the end, spending that much money to gain 4GB per server isn’t really worth it (especially since 16GB wouldn’t really be enough either).
I started to do a little research on what would be some good options for upgrades… I think I want to stay away from blades, simply because we did have the daughter card in the chassis fail once, which took down all 10 blades (the daughter card controls the power buttons). Buying 2 complete sets of blades/chassis is overkill just so you have stuff still up if one complete chassis goes down. The “must haves” on my list are hot swappable drives with hardware RAID, a hot swappable redundant power supply, and some sort of 10Gbit/sec (or higher) connectivity for communicating between servers. On top of it all, the servers need to be generally “dense” (I don’t want to take an entire rack of space).
PowerEdge C6100
The Dell C6100 actually looked really nice at first glance…
Pros
Super dense (4 servers in a 2U package)
Redundant power supplies (hot swap)
24 (!!) hot swap drives
Supports 10GbE or even QDR Infiniband (40Gbit/sec)
Cons
Age – the server itself came out about 18 months ago without any refresh. That means you have hard drive options and CPU options that are a year and a half old
Price – OMG… a single unit loaded up with 4 nodes, drives, RAM, Infiniband, etc. works out to $54,709 (before tax).
The age factor really becomes an issue when it comes to the disk drives… you can get 15k rpm drives in a 2.5″ form factor, but Dell only offers 146GB models (there are 300GB models now). The CPU isn’t really too bad… Dell’s fastest offering is the Xeon X5670… 2.93Ghz, 6-core @ 95 watts (wattage is important because of so much stuff crammed in there). There is a slightly faster 95 watt, 6-core processor these days… the Xeon X5675… basically the same thing, just 3.06Ghz. A 0.13Ghz speed difference isn’t a huge deal… but the hard drive difference is a big deal.
I started to think… well maybe I could just order it stripped down and then I could just replace the CPU/hard drives with better stuff. Doing that, you still end up spending about $60,000 because you end up with 8 Xeon processors (valued at about $1,500 each that you just are going to throw away).
Then I started to think harder… Wait a minute… Dell doesn’t even make their own motherboards (at least I don’t think so)… so maybe I could find the source of these super dense motherboards and build my own systems (or just find the source of something similar… which ended up being the case).
SuperServer 2026TT-H6IBQRF
What do we have here??? The Supermicro SuperServer 2026TT-H6IBQRF is more or less the same thing… 4 servers per 2U, hot swap hard drives, same BIOS/chips/controllers… supports the same Xeon processor family (up to 95 watts)… and as a bonus, Infiniband QDR is built in (it’s a $2,596 add-on for the Dell server), as well as an LSI MegaRAID hardware RAID card (a $2,156 add-on for the Dell server).
So let’s add up the cost to build one of these things with the CPU/hard drives I actually would WANT…
Chassis (includes 4 motherboards, 2 power supplies, hard drive carriers, etc.) – $4,630.98
Xeon X5675 3.06Ghz CPU – $1,347.84 each, so $10,782.72 for 8
8GB ECC/Reg DIMM – $124.26 each, so $5,964.48 for 48
600GB Seagate Savvio 10K.5 – $391 each, so $9,384 for 24 (also 4.1× the capacity, 21% faster and 20% more reliable than the 146GB options from Dell)
Add It All Up…
$30,761.88 would be the total cost (a savings of $24,138.82), and in the end you get slightly faster CPUs and *way* better hard drives. So in a single 2U package, you end up with 48 Xeon cores at 3.06Ghz, 384GB of 1333Mhz memory, 14.4TB of drive space (9.6TB usable after it’s configured as double redundant parity striping with RAID-6… which should be able to do more than 750MB/sec read/write), plus 8 gigabit Ethernet ports and 4 Infiniband QDR (40Gbit) ports.
Get 2 or 3 of those rigs, and you have some nasty (in a good way) servers that would be gloriously fun to run MySQL Cluster, web servers and whatever else you want.
So I was in a situation where I wanted to upgrade the operating system on 10 blade servers that are in a data center. The problem is I really didn’t want to sit there and install the new operating system on each, one by one. The other issue is the servers don’t have CD/DVD drives in them since they are blades.
I have a couple older Xserve G5s in the facility, so I figured why not use them as network boot servers for the Linux machines? By default, OS X Server has a service for NetBoot (which is not the same thing, and can really only be used to boot other Macs). But Mac OS X Server also has all the underlying services already installed to let it act as a server for PXE booting from a normal Intel BIOS.
So what network services do we need exactly (at least for how I did it)? DHCP, NFS, TFTP and optionally Web if you do an auto-install like I did.
Preface
This was written up in about 5 minutes, mostly so I wouldn’t forget what I did in case I needed to do it again. Some assumptions are made, like you aren’t completely new to Linux/Mac OS X administration. You can also have pxelinux boot to an operating system selection menu and some other things, but for what *I* wanted to do, I didn’t care about being able to boot into multiple operating systems/modes.
Setting Up The Server
Mac OS X Server makes it really simple to get DHCP, NFS and Web servers online since the normal Server Admin has a GUI for each.
Mac OS X has a TFTP server installed by default, but it’s not running by default and has no GUI. You can of course enable/configure it from the shell, but just to make things simple, there is a free app you can download that makes configuring and starting the TFTP service simple (the app is just a configuration utility, so it does not need to keep running once you are finished and adds no overhead). You can grab the app over here.
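If you’d rather skip the app, the shell version is just enabling the stock tftpd launch daemon:

sudo launchctl load -w /System/Library/LaunchDaemons/tftp.plist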
Files Served By TFTP
I was installing openSUSE 11.4, so that is what my example revolves around… most Linux installations should be similar, if not identical. First, make sure you have syslinux, which you probably already have since most Linux distributions install it by default.
Copy /usr/share/syslinux/pxelinux.0 to your Mac OS X TFTP Server: /private/tftpboot/pxelinux.0
Create a directory on your Mac OS X Server: /private/tftpboot/pxelinux.cfg, and then create a file named default within that folder with the following:
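Something along these lines (the IPs/URLs are placeholders for your own):

default opensuse
label opensuse
  kernel osuse11-4.krnl
  append initrd=osuse11-4.ird install=nfs://192.168.1.10/images/opensuse10-4 autoyast=http://192.168.1.10/autoyast.xml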
Obviously change the IP address to your Mac OS X Server IP address. The autoyast part is optional and only needed if you have an auto-configuration file for YaST.
Now we want to grab two files from the installation DVD/.iso so we have the “real” kernel.
Copy /boot/x86_64/loader/linux from the installation DVD to your Mac OS X TFTP Server: /private/tftpboot/osuse11-4.krnl (notice the file rename)
Copy /boot/x86_64/loader/initrd from the installation DVD to your Mac OS X TFTP Server: /private/tftpboot/osuse11-4.ird (notice the file rename here too)
Special Options For DHCP Server
There is not a GUI for adding special options to the DHCP server, but it’s easy enough to add them manually. Just edit the /etc/bootpd.plist file and add two keys/values to it (option 150 is the IP address of your TFTP server, option 67 is the boot file name the TFTP server is serving):
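It ends up looking something like this inside your subnet’s dict (values are placeholders, and option 150 may need to be the <data> form with base64-encoded bytes rather than a <string>, depending on the OS X version):

<key>dhcp_option_150</key>
<string>192.168.1.10</string>
<key>dhcp_option_67</key>
<string>pxelinux.0</string>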
NFS Sharing Of Installation DVD
Edit your /etc/exports file, and add the following line to it:
/images/opensuse10-4 -ro
Then just copy the contents of the entire installation DVD to /images/opensuse10-4. You can of course just do a symlink or something if you want.
Cross Your Fingers…
Hopefully, if all goes well, you should see something along the lines of this when you choose to network boot a machine (for purposes of this video, I did it in a virtualized environment in Parallels so I didn’t need to reboot any running servers):
So I heard that AT&T is going to start putting monthly usage caps on its Internet users (which includes me), so it got me thinking about how much I actually pay for AT&T services…
I have AT&T fiber to my house (actual fiber to the premises), so I use U-verse for TV and Internet in my house. Let’s break this down (I’m not including any one-time costs or fees/taxes)…
Television
I pay $138/month for TV (includes set top box rentals). I only watch MAYBE 8 hours of TV per month, but it’s nice to have it when you actually want to watch it. So I pay AT&T $17.25/hour to watch TV.
Cell Phone
I pay $159.99/month for 2 cell phones. Last month I used 17 minutes (which is actually more than normal). So I pay AT&T $9.41/minute for cell phone service. Oh yeah, I also paid them $125 for a MicroCell because their service is so crappy.
Internet
I pay $55/month for Internet because I can only get 18Mbit to my house (remember… I have AT&T fiber straight into my house). I’d like to pay for the $65/month plan to give me 24Mbit, but you know… it’s hard to get more bandwidth out of fiber I guess. I really wish Verizon would buy AT&T’s fiber to my house so I could get FiOS (they offer 150Mbit down/35Mbit up already and I heard they are testing 1Gbit up/down). Meanwhile I’ll be stuck at DSL speeds with my fiber. Sweet.
Mobile Internet
Now if I want a 3G connected iPad, that would be another $25/month (and I would rarely use it… so probably would be about $25/hour).
If I want tethering enabled on one of my phones, that’s another $20… even though I already pay for an unlimited data plan on that phone. lol
It would be nice if I could just pay $10/hour to use whatever I want… TV, cell phone, cell phone tethering, iPad connectivity, etc.
I was building a website for Digital Point Ads, and for what I wanted it to do, I figured Flash would be the way to go. So it was built, and worked fine in Flash.
The problem is that I’m an anal perfectionist and the Flash object was eating 80-90% of a computer’s CPU when being viewed… and me being me, I couldn’t just be okay with that.
…so I rebuilt it with CSS3/HTML5/jQuery. It turned out much nicer in my opinion, only uses about 1% of the resources that Flash needed and as an unintentional bonus, it works on things like iPad and phones/computers without Flash support.
I’m not anti-Flash (as I said, I *wanted* to do the site with Flash), but I AM anti-inefficient. There are still some things you can’t do with HTML5 that you can do with Flash, but those things are becoming fewer and fewer it seems.
Note: This was moved from blogs.digitalpoint.com to here, because well… blogs.digitalpoint.com is no longer a sub-domain we use (user blogs were wiped when we migrated to XenForo).
I’ve heard people claim they were banned from AdSense unfairly for this, that and whatever other reason through the years… and to be honest, I just chalked it up to them doing something they shouldn’t have been doing and just not admitting it.
Lo and behold, it *can* happen to even larger publishers (I believe we were approaching 1,000,000,000 [yes, billion] AdSense impressions over the years). Note: I can’t confirm the exact number because, well… my AdSense account was disabled.
We get advertising inquiries daily and we even go so far as directing people to AdWords and explaining to them how to use Site/Placement targeting if they wish to advertise on digitalpoint.com. It’s less money for us, but in the end it’s easier and less to manage. I would guess Google has gained at least 200 NEW AdWords advertisers because of this.
The Warnings
In the last month, we received 30 warnings for running AdSense ads on non-compliant websites (gambling related). These are sites that I don’t own, have no affiliation with, nor do I know who the owner is. I have no idea why someone would want to use my AdSense publisher ID on their site, but I guess that’s beside the point really. AdSense allows you to set a whitelist of your sites just so this doesn’t cause problems. We have used this whitelist for a long time (since we first heard about it), and none of these gambling related sites were on our whitelist.
Hell, I even got an email from Google *because* I use a whitelist (and STILL didn’t turn off the whitelist function)…
Your Allowed Sites settings blocked $220 in earnings last week
We noticed that you’ve been receiving ad activity on sites which aren’t included in your Allowed Sites list. If a URL displaying your AdSense ad code is not on your Allowed Sites list, ads will still be displayed, but you won’t receive any earnings for that URL.
For your reference, sites that display your ad code, but aren’t included in your Allowed Sites list generated roughly $220 from May 2 through May 8.
My Response
After seeing a zillion of these notices coming in, I responded and let them know that these are not my sites and that I use their whitelisting feature (and that none of these sites are on my whitelist).
Sarah H. from Google’s AdSense Team responded back a couple weeks later letting me know that, “If that site or URL is not in the Allowed Sites List within your account, no further action is needed and this issue won’t negatively affect your account in any way.” Alright… no further action is needed.
Account Disabled
Fast forward a couple days and I get this email from Google letting me know my account is now disabled because of violations of program policies… specifically, AdSense publishers are not permitted to place Google ads on sites with content related to gambling or casinos. I *still* don’t own a gambling/casino related site (nor have I ever), so I’m assuming it’s related to the 30 warnings I got in the last month for someone else trying to run my publisher ID on their sites.
While I still think the majority of people who claimed to have their AdSense account unfairly terminated are probably just whiners that got caught doing something they shouldn’t have been doing, I can say with 100% certainty now that it can (and clearly does) happen sometimes.
I guess it’s time to finally start managing advertising in-house… Just one more thing to add to the “to-do” list. /sigh
It seems Google has quietly raised its limits on how deep you can look into search results with the AJAX Search API. It’s always been 4 pages (8 results per page), so you could only see the first 32 results.
All of a sudden it was changed to 8 pages (still 8 results per page), so now you can look at the first 64 results. This is quite handy.
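If you want to poke at it yourself, the depth is controlled by the start parameter (rsz=large is the 8-results-per-page mode)… something like:

curl 'http://ajax.googleapis.com/ajax/services/search/web?v=1.0&q=digitalpoint&rsz=large&start=56'

…where start=56 is the last page that works now (it used to error out past start=24).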
Someone I know uses iPhoto, but the Faces part of it wasn’t working. New pictures wouldn’t get scanned for faces and the spinning icon next to the Faces label on the navigation bar would just spin forever (and apparently it’s always been stuck like that).
I Googled the issue, and came up with hundreds of people having the same problem, but no one seemed to figure out a solution.
After a lot of digging and poking around, I think I found the solution. She had imported large folders into iPhoto from her old photo management application, and she had some movies in her photo folders. So this is how I fixed it…
Quit iPhoto first.
Then get into the guts of your iPhoto Library (right-click the iPhoto Library file in your Pictures folder and do “Show Package Contents”).
Go to the Originals folder in there and get your movies out of there.
Delete the face.db and face_blob.db files.
Load iPhoto up and it will start scanning all your faces and with any luck actually get through them all and be kosher going forward.
Warning… this will delete all your face tags you had previously (so don’t do it if you tagged a bunch of people manually and don’t want to lose that).
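For the Terminal-inclined, those steps boil down to something like this (assuming the stock library location and .mov movies… adjust paths to taste):

cd ~/Pictures/iPhoto\ Library
mkdir -p ~/Desktop/iphoto-movies
# pull the movies out of the Originals folder
find Originals -name '*.mov' -exec mv {} ~/Desktop/iphoto-movies/ \;
# nuke the face databases so iPhoto rebuilds them
rm face.db face_blob.db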
Well actually, I’m not exactly sure how I have Twitter followers, since I don’t have a Twitter account… but apparently I have a couple Twitter accounts…
Okay, gonna go ahead and upgrade the Digital Point Forums to vBulletin 3.8.3… Since so many of you are freaks and are on it 24/7, I made this post to keep you up to date on what’s going on (I’ll update it periodically throughout the process). You can also use this blog post as the new temporary forum for all your discussion needs while I do the upgrade. hah
3:26 am – Reading and posting seems to work… I suppose that’s good enough for now (still working on the other stuff, but don’t need to force everyone off to do it).
3:24 am – Skimming over various areas of the forum to see if things (mostly) work. I could let you guys in before the templates are fully updated possibly.
3:22 am – New moderator permissions set
3:17 am – Static CSS files located on single server in web cluster and spread around properly (I hate this about vBulletin BTW… gimme a hook location to do it please!)
3:09 am – Core upgrade done.
3:07 am – Recoded some of the upgrade scripts so they aren’t making that change to the reputation table. Will deal with this issue later instead.
3:00 am – Stupid reputation table was altered in 3.8.0 to not allow negative userids so it can accommodate 4 billion users instead of “only” 2 billion. Stupid. I used negative userids internally for some stuff. /thinking what to do about this…
2:55 am – First problem… reputation table alterations not going well. Going digging in raw database…
2:50 am – Running “ALTER TABLE pmtext” on DB servers. That’s a big one… going to get another beer.
2:49 am – Up to version 3.8.0 alpha 1
2:43 am – It’s almost 3am. If you are a cute girl reading this, please post your picture in the comments, k thanks.
2:41 am – I forgot we have to go through all versions to get to the newest… lol… going through 3.7.0 beta 4 at the moment.
2:40 am – Userlist rebuilding
2:36 am – Watching DB servers alter thread table for tagging support… (this is really boring)
2:34 am – Watching DB servers alter thread table for prefix support… /bored
2:30 am – New PHP files in place and synced across web server cluster
2:29 am – DB backup done
2:27 am – Thinking I might not have enough beer for this…
2:25 am – DB backup still running (it’s huge… many, many, many gigs)
So I’ve been waiting for this thing for a while now, and it looks like Sony is finally going to be releasing it (even $400 cheaper than previously rumored). Yum, yum…
400 discs, serial control, Ethernet port for pulling metadata via the Internet, etc… looks nice to me.
I gotta say, iMovie HD is really nice for hacking together quick videos with pretty decent quality. I threw together this video for a friend real quick for NVR Strings. Oh darn, I get to look at girls in bikinis while editing video. Sad day for me. 🙁
The original video I output was full 1920×1080 high def, but here’s the YouTube version…
Okay, so I decided not to blog for 2+ years to see if the spam blocker thing I made would work, and sure enough it did… After 2+ years, not a single spam comment got through… I’d say that was pretty good, eh?
On a better note, I have 2 years of life to write about now… stay tuned… 🙂
The comment spam on my blog here was getting to the point of just being silly… around 4,000 spam comments per day (breaks down to about one every 20 seconds, 24/7).
I decided to try and do something about it so I don’t have to weed through them manually (the time consumption on this task is one of the biggest reasons I don’t post as often as I used to).
So let’s see how it works… if you see me posting more often, then you know it worked. 😉
I don’t recall how I came across this image, but I was rather surprised to see it randomly (considering it’s me)…
Some sort of heat map where people look at an image, and decide what’s the most interesting part of it by clicking somewhere on it…
So people thought my face was interesting I guess… either that or they knew their click would register a red dot and were just trying to cover my face up. 🙂
The original image was taken from this blog (over here).
Want to see some crazy stuff? In some areas of the world, you can zoom in to an absurd level. Pretty close to being able to actually identify an individual.
Let’s take this scene for example… some people in Africa with their herd of camels and cows, gathered around a well. I mean come on… you can see footprints, and some dude who must have known his picture was being taken, because he’s looking right at the camera. 🙂
Almost looks fake huh? Well you can see it for yourself here:
So like maybe Google could use some of their satellites and computing power to find Osama bin Laden and collect whatever huge reward there is for him… just a suggestion for anyone listening…
Well, sorta anyway… since Apple officially announced their iPhone, Apple stock has gone up $11.14 per share. I bought 100 shares a long time ago (it’s since split twice, so now I have 400 shares). The value of my shares has gone up $4,456 since yesterday morning. More than enough to cover the $500-600 price tag for one if I wanted one.
It does look like a pretty pimp phone, and includes pretty much everything you could ever want (Wi-Fi, movies, music, GPS, QWERTY keyboard, etc, etc.)
Too bad they will only work with Cingular and their over-priced data plans. But whatever… free is still free (sorta… heh)
I’m not much of a gamer, but I did buy an Xbox 360 last summer. Anyway, I finally found the best use for it… “beaming” your media from your Mac to it (music, photos and movies). The only annoying thing is all the music you buy from the iTunes Music Store doesn’t work since it’s DRMed with Apple’s FairPlay stuff. And since that’s the only place I get music anymore, none of the new stuff I have will work. Oh well… still cool nonetheless…
Well, I decided a long time ago that it would be neat to put a big photovoltaic system at my new place and try to generate 100% of the electricity needed for the house (and servers and everything else I use since I work from home).
So I ended up going with a system that will be able to generate 42.8kW of power (35.7kW will be the “practical” rating for it), which consists of 252 panels (each panel can generate 170 watts of power). The idea is to generate more electricity than you need during the day (running the SDG&E meter backwards), then draw on it at night.
I hope the system has some sort of SNMP probing ability… would be cool to pull up a graph of power output over time.
Okay, this is what I threw out there in my first post.
Doesn’t work great with Macs (you can sync address book, calendar, email, etc. with a 3rd party program, but the program is really buggy and only works over a USB cable and not Bluetooth [yet]). It also doesn’t mount in the file system on a Mac, so you can’t copy files to it. Bluetooth file transfer from a Mac doesn’t work either. It’s ultimately a phone, so once you get crap on there, it’s good, and it does work flawlessly as an Internet gateway (via bluetooth), which is the most important part.
So I just wanted to clarify a few things after using it for a week or so…
As mentioned above, you *can* sync stuff with PocketMac, but it’s buggy as hell and certainly doesn’t have the usual Apple polished feel to it. Currently it only works with a USB cable, but they are supposed to be coming out with a Bluetooth version soon (I hope they fix the bugs while they are at it).
It turns out it *does* mount in the file system with a USB cable, but only if you have a MicroSD card installed (I threw a 2GB chip into mine for extra storage space).
Bluetooth file transfers *do* work it turns out, but it’s a bit of a kludge. You can’t just browse the device file system like you can with other phones… you can send and receive individual files though.
It’s a cool ass phone… especially considering you can get it for $50 instead of the normal $400-500 (see previous post).
A couple weeks ago my network router at home finally died (it was a Linksys WRT54G), but I certainly got my money out of it (I got it when it first came out which was probably 4 or 5 years ago). Every light on it flashed, which couldn’t be good… so I replaced it.
Long story short is I bought another one (same model) and just swapped it out with the new one (using the old power supply). Well, it was doing some weirdness every 24 hours or so where the power light would flash and it would lose Internet connectivity (all other lights were fine), and if you unplugged it and plugged it back in, you were golden again. Everything I looked at online was saying a flashing power light meant the firmware was screwed up (I didn’t change anything with it). Finally I decided to look closer at the power supply… even though it was the same model, the new version runs on 12 volts and the old one ran on 5 volts.
So I guess you can run a 12 volt piece of equipment on a 5 volt power supply and it will work (for a while anyway).
Okay, so I finally found a new cell phone that I like (enough) to get a new one…
The good:
It’s the same size as my RAZR (same thickness, 0.5″ taller and 0.25″ skinnier)
Fast internet (speed test put it at 205kbit/sec for me)
Use it as an Internet gateway for your computer (great for traveling)
Quad-band GSM (use pretty much anywhere in the world)
User interface is really nice
Pseudo QWERTY keyboard
Nice camera… good quality, decent resolution
Gmail and Google Maps applications are pretty nice (better than web based access to the services).
Support for MicroSD cards, so you can slap another 2GB of memory in it (it comes with 64MB internally).
The bad:
$400 (although with a new contract you can get it for $50 [see below])
No GPS
Camera does not shoot video
Doesn’t work great with Macs (you can sync address book, calendar, email, etc. with a 3rd party program, but the program is really buggy and only works over a USB cable and not Bluetooth [yet]). It also doesn’t mount in the file system on a Mac, so you can’t copy files to it. Bluetooth file transfer from a Mac doesn’t work either. It’s ultimately a phone, so once you get crap on there, it’s good, and it does work flawlessly as an Internet gateway (via bluetooth), which is the most important part.
I wish you could create folders for your list of applications.
If they could just make it work better with Macs (like if iSync supported it via Bluetooth and if Bluetooth file transfers worked), my only real complaint would be the lack of an internal GPS.
Where to get one for $50 (thanks to Julien for this):
It looks like Blizzard is one of the growing number of companies that have realized peer-to-peer file sharing technology has some great uses. I just noticed they distribute World of Warcraft (the full DVD and updates) via P2P technology. Cheaper for them, since it uses the end users’ upstream bandwidth.
So I finally broke down and got a new laptop (my old one was like 6 years old). The one I opted for was the faster 15″ MacBook Pro (2.33Ghz Intel Core 2 Duo, 2GB RAM, 256MB video RAM, etc.)
All in all, a pretty sweet machine (and I can run things like SuSE Linux [or any other x86 operating system] on it at the same time as I run OS X [without emulation]), but here’s the problem… I turned it on, created the admin account and now I find myself using my old laptop instead of the new one because the new one is just *too* nice. Like I don’t want to get any fingerprints on it or something (I don’t know why exactly). haha
Hopefully it will pass soon… otherwise I have an expensive piece of art that no one is allowed to touch. 🙂
I’m curious what people would think about someone (talking about me) simply not having an email address in this day and age…
I’m not talking about pretending to not have an email address, but ACTUALLY not having one. Like if you sent me email at my old address it would bounce back with an “unknown user” error.
I get literally thousands of emails per day (mostly spam slipping through the spam filters), of which 3 or 4 actually require some sort of response. The 2,000+ other emails each day become so overwhelming that I check my email once per week these days (it would get to my eyes faster if you snail mailed me a hand-written letter), and even when I do check it, 99.99% spam means that as I’m clearing it out, I’ll inevitably throw out some non-spam if it’s not immediately recognizable. So I’m starting to think… do I really need email? All my friends have my instant message ID, and they all know about my email situation, so really… do I need email anymore?