2011.10.09

A Respect for Cross Platform Developers

02.52.30 - Mark

A long time self-declared geek, I'm a little surprised I've never really sat down and learned C++. I mean, I've played around with a variety of programming languages, and I've had a copy of CodeWarrior for the Mac for a decade or so. So while I remember doing some "Hello World!" and tutorial work in it, I'm only now properly learning it because I'm taking a college course on it. While a lot of the basics are similar to the PHP and Arduino code I already work with, the fact is I'm learning a bit more than I expected.

Specifically, as a happy Mac user, I'm comfortable banging away in Apple's Xcode. Unfortunately, the course prefers Microsoft's Visual C++ Express, which, no, does not play well with WINE like many other Windows apps do. So while I'm quickly picking up on the syntax of C++, building my own Rosetta Stone comparing and contrasting the languages I know, I'm also working on the art of cross platform development.

Ten years ago, when OS X was new and shiny and Macs still ran PPC processors, cross platform development was pretty rare. Only a few companies, like Adobe, Blizzard, and Bungie, would actually make an effort to straddle the fence. It always annoyed me that only the big (or at least Mac-based) companies would go cross platform; after all, they were almost all using C, C++, maybe some Pascal, so why not cross over? Was the Mac really that daunting?

Well, while I still don't consider the Mac daunting, translating even a "common" language can be a gauntlet. I'm not going to claim to be a programming prodigy, but it only took about an hour to read over the requirements and bang out a working program in Xcode. Add another hour to write up the documentation, and it was time to tackle the Windows side. At which point I spent another 90 minutes trying to figure out what the Windows side needed, rereading my code and googling the error codes. In the end I had repeatedly overlooked a rather simple solution, one that probably should have been required on the Mac side too. The fact is, the people who manage to port software deserve a lot of respect, especially those who add Linux into the mix...

Link | 0 Comments |


2010.05.28

Metapost Changes

03.36.36 - Mark

A few months ago I restarted my metapost feature, built a working version (as opposed to the nonfunctional heap of code I had when I first tried building the feature), then set up a cron job to run it every day. At the time I figured I'd be writing more articles, talking about cool things, the way I used to run this site. If that had happened, daily bursts wouldn't have been a bad thing.

But obviously that didn't quite happen.

There are reasons why, which shall be written up, just not in this little note. Since the metaposts have been showing up like crazy, I decided last week to make them a weekly occurrence rather than a daily one. Methinks it'll look better with the blurbs I've been doing on them as well. Of course, a tweak here and a redesign there - it's always adding more things to change on this site.
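
For what it's worth, the change itself is a one-line edit to the crontab. A minimal sketch, with a made-up script path standing in for the real one:

    # old: build the metapost every day at 4am
    # 0 4 * * * php /home/mark/metapost.php
    # new: build it once a week, early Monday morning
    0 4 * * 1 php /home/mark/metapost.php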

Link | 0 Comments |


2010.03.04

Grepping through things

02.01.09 - Mark

I've been aware of GREP for years. I sort of remember it as a feature in BBEdit Lite, the great program I really learned HTML with, but I always made do with the regular, plain English search and replace commands to fix problems. That stayed true when I finally moved to TextWrangler (after it was released as freeware to fully replace BBEdit Lite).

I also knew GREP, or regular expressions as it's sometimes called, was in Perl and PHP; I'd seen it on XKCD, both as a comic and as a t-shirt, and I probably knew it was available at the command line in OS X and Linux. Those references and bits of knowledge made me aware of it, but it wasn't until earlier this year that I was handed a task I probably could have hacked through with traditional search and replace, except it needed enough changes that I figured it would take less time to learn a bit of regular expressions.


Regular-Expressions.info helped me a lot, but it still took a little longer to figure out than I had guessed. However, it was well worth the effort; over and over GREP has proven to be very helpful, and while I'm not a master of the syntax, I can do a bit of damage with it in TextWrangler without a cheat sheet.
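
To give a flavor of the kind of thing I mean (a made-up example, not the actual task): say you want to reshuffle every date in a file. In TextWrangler's grep-enabled Find & Replace, something like

    Find:    (\d{4})\.(\d{2})\.(\d{2})
    Replace: \2/\3/\1

turns 2010.03.04 into 03/04/2010 across the whole document in one pass, which is exactly the sort of job that's miserable with plain search and replace.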

About two weeks ago, I stepped up to using some grep in a PHP script.

I've been off and on reading Rockwood Comic for years, but its lack of an RSS feed sort of pushed it further back in the rotation. The odd thing is I remembered a bit about scraping websites to create RSS feeds. While there are plenty of tools out there that will do the same thing, part of me figured it would be simpler, more precise and up to date, and a fun little challenge to create it myself - at least for that comic site. Plus, once it was kicking around in my head, I knew I'd be using GREP in PHP to dig out some of the content. Once I think it's working a bit better I'll think about writing another post on the hack, especially since this post is mostly rambling on about how wonderful a tool GREP is for geeks, and that I created an RSS feed for Rockwood Comic using PHP, GREP, and cron to write the RSS file, all wrapped up in Feedburner to give it a prettier URL than the sandbox address my code resides at.
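
The core of the hack, in rough outline - a sketch with a made-up URL and pattern, since the real ones depend on the site's HTML:

    <?php
    // fetch the comic's front page (guessing at the URL here)
    $html = file_get_contents('http://www.rockwoodcomic.com/');

    // dig the current strip's image out of the markup with a regular
    // expression; the actual pattern has to match the site's own HTML
    if (preg_match('/<img[^>]+src="([^"]+strip[^"]+)"/i', $html, $m)) {
        $img = htmlspecialchars($m[1], ENT_QUOTES);

        // wrap it in a minimal RSS item and write out the feed file
        $item = "<item>\n"
              . "  <title>Rockwood for " . date('Y.m.d') . "</title>\n"
              . "  <link>" . $img . "</link>\n"
              . "  <pubDate>" . date(DATE_RSS) . "</pubDate>\n"
              . "</item>\n";
        file_put_contents('rockwood.xml', $item); // plus the usual feed wrapper
    }
    ?>

Cron runs the script once a day, and Feedburner points at the resulting file.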

Link | 0 Comments |


2007.03.31

Stuff I didn't pick up in Math class

22.56.01 - Mark

The web toy I'm building is all of a sudden becoming a pain in the ass. The design is more or less done, and most of the important features work, so tonight I set up a couple dozen fake users and a script to populate it with "answers" using random numbers, almost completely forgetting that all I would get would be a perfect real-world example of the law of averages. D'oh!
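
In hindsight it's obvious; a quick sketch of the effect (hypothetical numbers, not my actual scoring code):

    <?php
    // average a pile of uniform random "answers" on a 1-to-5 scale
    $sum = 0;
    $n   = 10000;
    for ($i = 0; $i < $n; $i++) {
        $sum += rand(1, 5);
    }
    echo $sum / $n; // prints something very close to 3, every single time
    ?>

Every fake user comes out looking identical, so any math built on comparing them just flattens out.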

I don't know that that's actually going to be a major concern; the test showed me the scripts can handle the load. But I think some of the math powering the thing isn't what it should be, and I probably need to pull out the math books I've got floating around and read up on statistics. More importantly, I should probably be developing more social tools, like implementing groups. That, and test it with humans instead of random numbers.

Still, it would have been nice to launch the site tomorrow. April 1st is an easy birthday to remember.

Link | 0 Comments |


2007.02.17

New-ish features

15.33.16 - Mark

While part of the reason I haven't posted a lot recently is a crappy interweb connection, the other part is that I've been paying more attention to the non-content side of this blog: tweaking code and database schemas and adding new features. Among other things, I've added sidebar features and dedicated pages for books and movies I've recently consumed, and I'm fleshing out a link blog feature (again with a sidebar block as well as its own page).

I'll probably try unifying all of this stuff next week. Boredom has done a lot to spawn feature creep.

Link | 0 Comments |


2007.01.03

Blog Newly Refined

15.22.41 - Mark

Getting bored while fixing bugs is not a good thing. Rather than simply fixing the bugs in my commenting engine and clearing out the crap comments that have piled up over the last several months, I started fiddling with other aspects of my blog code. I added toys like a complete tag cloud and index pages for my better posts and my multimedia files, improved the sidebars by fixing a bug in my archive calendar and adding a stripped-down version of my tag cloud as a navigational block, and added some practical features, like displaying post titles in the page headers and fixing a long list of things with the comments.
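
The tag cloud is the usual trick of scaling each tag's font size by how often it's used. A minimal sketch, assuming the counts have already been pulled from the database (the numbers here are made up):

    <?php
    // $tags maps tag name => post count, e.g. from a GROUP BY query
    $tags = array('video' => 42, 'computers' => 17, 'life' => 5);
    $max  = max($tags);
    foreach ($tags as $tag => $count) {
        // scale font size between 80% and 200% of normal
        $size = 80 + round(120 * $count / $max);
        echo '<a href="/tag/' . urlencode($tag) . '"'
           . ' style="font-size:' . $size . '%">'
           . htmlspecialchars($tag) . "</a>\n";
    }
    ?>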

Having done almost no programming or web work in the last few months, I almost forgot how much fun it can be to churn out some code.

Link | 0 Comments |


2007.01.01

Keywords, Links, and the Kitchen Sink

16.38.32 - Mark

The comment spam problem on this blog has finally gotten to me (the database powering this thing has 60MBs of plain text spam comments!) and I'm now in the middle of testing a couple of new tools in my little war on spam. The main reason I've put it off this long is that I thought it would take an entire overhaul of the comment system to even attempt to cut back the crap comments, but thankfully I was wrong.

When I started Googling for spam filtering tools, I quickly found two existing services. One is called LinkSleeve, which basically looks at the links in the submitted data and compares them to its existing database. The second is Akismet, which seems to be the intimidating sentry in the field.

As it turns out, neither was that hard to install into my existing system. LinkSleeve was literally cut and paste, with no modifications needed at all, while Akismet was a little more hands-on, involving registering with WordPress for a free account, then researching ways to connect my code with their service. While I was able to find the right materials, it involved some programming on my part, adding a couple of calls and changing some variables around.
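
For the curious, the Akismet side boils down to one HTTP POST per comment. A rough sketch of the sort of call involved - simplified, with made-up form field names; the endpoint and parameters are the ones in Akismet's public API docs:

    <?php
    // ask Akismet whether a submitted comment looks like spam
    $key     = 'YOUR_API_KEY';       // from the free wordpress.com account
    $author  = $_POST['name'];       // whatever the comment form submitted
    $comment = $_POST['comment'];
    $data = http_build_query(array(
        'blog'            => 'http://example.com/',
        'user_ip'         => $_SERVER['REMOTE_ADDR'],
        'user_agent'      => $_SERVER['HTTP_USER_AGENT'],
        'comment_author'  => $author,
        'comment_content' => $comment,
    ));
    $ctx = stream_context_create(array('http' => array(
        'method'  => 'POST',
        'header'  => "Content-Type: application/x-www-form-urlencoded\r\n",
        'content' => $data,
    )));
    // comment-check answers with the literal string "true" (spam) or "false"
    $verdict = file_get_contents(
        'http://' . $key . '.rest.akismet.com/1.1/comment-check', false, $ctx);
    $is_spam = (trim($verdict) === 'true');
    ?>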

Since early this morning, all of the comments on this site have been evaluated with my own filtering rules, along with LinkSleeve's URL screening and Akismet's blend of filters, and the results are a bit surprising. When I added LinkSleeve I thought it had the best approach; since comment spam is all about the links, I thought it would catch the junk comments my filters were missing (the "Hey! Cool site" comments that are hardest to screen). But not only does it miss most of them, it also fails to catch spam comments with a dozen obvious junk links. This may be due to a lack of users sending comments into the system, but right now it's even far behind my rudimentary keyword/IP based filters.

Of course, once I had Akismet set up, it blew away my existing tools, capturing the vast majority of the spam comments that have trickled in since its installation. That's not to say it's perfect, but I think it's safe to call it somewhere around 90% accurate right now.

There will, however, be more spam comments here for the next couple of days. While I think Akismet will be my primary tool for stopping spam, I'm probably going to continue using all three systems to catch spammers in the act, and set up a master script to direct spam to various levels of purgatory based on which filters it trips. There are going to be a few other upgrades (in addition to a significantly cleaned up database) to my little system, but I feel so much better having found a better way of dealing with the spam around here.

Link | 0 Comments |


2006.08.31

House Cleaning

02.44.00 - Mark

After letting my blog sit in its own juices for the last few months while I was at camp, it managed to get hit by comment spam pretty hard - one post had something like 327 spam comments that snuck past my (admittedly crude) filters.

The total amount of spam I've received this year was close to 10MBs of data! Just for comparison, all of my 1500-some blog posts take up about 1.4MBs - 1/7th the space!

That's not to say it's all been deleted. Because I was in there doing some heavy duty cleaning, I shifted some structures around to make things more manageable, and I'll need to fix some of the related scripts to match - so don't worry too much if commenting is broken over the next day or two.

On the other hand there's no doubt in my mind that spam is a serious problem, even for small bloggers using homebrew software.

Link | 0 Comments |


2006.05.08

Junkers

15.31.41 - Mark

I'm done, very happily done. I'm beaten to hell and I kind of feel like a slimy used car salesman. The site I delivered this morning in my internet projects class was finished about 5 minutes after the "client" walked in. It works, but there are plenty of bugs and broken features just under the surface - bugs I made a point of not going near. It should be a little more stable than baling wire and bubble gum, but I don't know by how much. I suppose I'll see if I ever get an email asking how in the hell to fix it - there's about as much documentation on that code as there is on academic circles within migrating schools of fish.

In non-junk news, I decided to go for the CX300 headphones that I mentioned the other day. I picked up some cash from one of the websites I manage so I could afford them, though I would have picked them up anyways - I went back to the white earbuds for a day, and after I get my package on Wednesday, the Apple earbuds are getting tossed. I'd rather not listen to music than use them again. (And for the record, I'm not an audiophile by any stretch of the imagination.)

Link | 0 Comments |


2006.04.07

Bug Fix Edition

10.29.19 - Mark

I've gotten around to fixing half a dozen problems in the blog, including unbreaking the comment engine (I blame the spammers), making the site usable in Internet Explorer (despite the fact that you should be using Firefox), and writing in some bottom-of-the-page navigation so you don't need to resort to using the monthly archives.

I'm pretty sure those fix all of the complaints I've received in the last few weeks.

Link | 0 Comments |


2006.03.27

My Bad

09.27.57 - Mark

Late last week I had about a dozen junk postings find their way into my blog's database. They weren't comment spam, although at first glance they looked like it, with junk email addresses and the poorly spelled messages all spam seems to contain. What was a bit atypical was the spammer's address, which was at my domain. It didn't hit me why that was until this morning, when another one of these messages popped up.

I was being used to help sleazeballs in Latin America spam some poor fool's email account.

Ooops. My Bad.

The quick patch was a series of rules a comment needs to meet before it gets posted, and when I get around to it I'll probably put together some IP filters and email verification code as a basic spam filtering system, and then move it all over to another "installation" of my blog software before spammers discover it in 3 months.
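
The rules themselves are nothing fancy; a sketch of the kind of checks I mean (illustrative thresholds, not my exact ones):

    <?php
    // crude pre-posting checks for a submitted comment
    function looks_like_spam($comment, $email, $ip, $blocked_ips) {
        // too many links is the classic tell
        if (preg_match_all('/https?:\/\//i', $comment, $m) > 3) return true;
        // reject obviously malformed email addresses
        if (!preg_match('/^[^@\s]+@[^@\s]+\.[^@\s]+$/', $email)) return true;
        // known bad IPs go straight to the bin
        if (in_array($ip, $blocked_ips)) return true;
        return false;
    }
    ?>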

Patches aside, I kind of feel bad for the dozen or so people who have been spammed because of an exploited error in my code...

Link | 0 Comments |


2006.03.08

Through the looking glass

00.14.30 - Mark

I'm starting to bang away at the hardware hacker's recipe box idea, and as I start writing some scripts for it, it occurs to me just how complex the content management problem really is. Much more so than with this blog engine, where I know the sole user better than anyone else on earth and can design and code according to what's intuitive.

With this bulked-up system, there are far more questions to ask and answer. How do we get the user to enter data correctly? What type of entry form is more user friendly, and what's the best way to implement it? How do we error check all that information? How do we divide that information up? Where is it stored? Does that database schema make sense? Will I reuse that code? How much, and what will change between calls?

Fun challenge, but it seems like I've fallen through a looking glass, and there's a pair of questions for every answer.

Link | 0 Comments |


2006.03.05

Anticlimax

04.03.56 - Mark

It still strikes me funny how every time I wrap up a sizeable coding project, it's completely anticlimactic. Maybe I've just watched Hackers, Swordfish and Antitrust a few too many times, and the idea of an emotionally charged end to a long code session is too ingrained in me for reality to make a dent in my delusions.

This isn't the Maker related project I mentioned a while back (which I'll probably start in the next day or two unless a better idea presents itself). Rather, I'm dragging my Dad into online publishing, and it's more fun for me to do everything from scratch than it is to go out and install a copy of WordPress or some other prebuilt blog engine and then hack my way through the code of someone else's template.

Look for a link when there's actually some content on there, which shouldn't be too long; the first things to go up will be some of the editorials and columns he's written. (The dirty secret here is that I refuse to read the newspapers he's worked for, but still want to read his columns - the whole point of his website has been so I could get an RSS feed of his writings. If that costs me $8 a year for a domain name, I'm fine with it.)

Link | 0 Comments |


2006.02.27

Spam Pizza?

14.53.53 - Mark

Shoot. I was partially hoping that my not using a standard blog engine would keep the spammers at bay. Maybe that's true of some scripts, but not all. Two months without spam isn't bad, I suppose, considering how high I've been placed in Google these days.

Looks like I'll need to work out some way to manage spam. Probably needed to do that anyways, considering I'm recycling my blog engine's code base on a couple other sites.

Link | 0 Comments |


2006.02.22

So what's in it?

22.04.05 - Mark

There's a Google hack out there called "Google Cooking" where you punch in everything you've got to cook with, and Google spits out a recipe. As a mild food geek in a house where there's never everything I need to make what I want, I've resorted to Google cooking more than once, usually with decent results.

Recently I've been thinking about a way to document the recipes I like while still being able to use the idea of Google Cooking: a quick, easy way to sort the recipes I like by what I have.
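
In database terms that's a matching problem: find recipes whose ingredient list is a subset of what's in the pantry. One way to sketch it, with a completely hypothetical schema:

    <?php
    // hypothetical tables: recipes(id, name), ingredients(recipe_id, item)
    $have = array('eggs', 'flour', 'butter');
    $list = "'" . implode("','", array_map('mysql_real_escape_string', $have)) . "'";
    // a recipe qualifies when it has zero ingredients I don't have
    $sql = "SELECT r.id, r.name
            FROM recipes r JOIN ingredients i ON i.recipe_id = r.id
            GROUP BY r.id, r.name
            HAVING SUM(i.item NOT IN ($list)) = 0";
    ?>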

Like all good ideas, it seems like I'm not the only one thinking along these lines. Dave Slusher tossed out essentially the same idea for discussion on the uplifter blog, but for MAKE and DIY projects.

Coding a single user blog engine is one thing, and building a community driven site is another. But then again, I didn't know anything more than some elementary BASIC 9 months ago...

Link | 0 Comments |


2006.02.06

Inane Unsanity

09.51.31 - Mark

I am in what I hope will be my last semester at High School University (name changed to protect the guilty), and one of my classes to finish this 2-bit internet tech degree is a projects class.

The main project is to create a website for a student organization on campus, and make it easy for non-technical students to edit. To me that means "no more complicated than email". Not "no more complicated than raw xhtml in a text editor".

Supposedly we are allowed to use everything that has been covered in past classes, which in terms of page creation includes XHTML, CSS, PHP, MySQL, scraps of Perl, and an almost useless amount of Javascript.

Except we're not. I'm now being told that using PHP and MySQL would make the site incompatible with the (school) server that the site is to be hosted on.

I could go in a number of directions here, but the vast majority of sites out there (around 70%) use a Linux, Apache, MySQL, PHP/Perl/Python configuration. That 70% also includes the majority of the major sites out there, like Amazon and Google.

The minority use the Windows, IIS, ??? setup the school uses.

If they didn't want me using it, they shouldn't be teaching it. Then again, they haven't been teaching it that effectively, so maybe I'm missing some subtle hints...

Link | 0 Comments |


2006.01.26

Site Feed Improvements

17.42.04 - Mark

The default site feed should have enclosures now. I also fixed the bug mentioned earlier. Stupid Ampersands.

Link | 0 Comments |


Blog Engine Code Updates

15.11.50 - Mark

I'm squashing some bugs in the blog code right now, specifically the one in my categories where items with non-alphanumeric characters screwed things up. This wouldn't have been a problem except that I've tagged a few things with spaces. I mainly did that one for myself; I got tired of seeing spaced categories making up my error reports.

The next major changes are going to be to the RSS feed. I've made 4 links in the last 20 posts that mucked it up (which is why my feed is showing up as raw HTML). I've found the 4 responsible URLs, so I'll start working out a fix. Depending on how complex that gets, I'll probably get around to putting in enclosures on the main feed, as well as a separate media-enclosed feed.
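
The fix is almost certainly just escaping the URLs before they go into the XML, since a raw ampersand is illegal there. A minimal sketch:

    <?php
    // raw & characters in a URL break the XML; escape them first
    $url  = 'http://example.com/page.php?id=12&view=full';
    $safe = htmlspecialchars($url, ENT_QUOTES);
    echo "<link>" . $safe . "</link>\n";
    // emits <link>http://example.com/page.php?id=12&amp;view=full</link>
    ?>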

Link | 0 Comments |


2006.01.22

Comments on Categories

03.25.19 - Mark

I figured I should make a comment about the way I'm handling categories on this site, for two reasons. First, it's a new feature on the blog since the move from Blogger, and second, it's not really structured.

Because I made this move after several hundred posts, it's not very practical to implement a category system retroactively, so I've been trying a free-form approach. There are some categories that very quickly came forward, like video, computers, school, life, and stuff. Most things aren't so tidy, though, and as I've been making new posts here I've been treating my categories more as web 2.0 tags, phrases, and Slashdot-like department jokes.

The links to my "category" pages have the tag relationship for the benefit of Technorati (which produces a nice tag cloud on my profile page); even the forms for the posting engine refer to my categories as "tags".
[Screenshot: the posting engine's tag form]
Running through my logs I can see where this seems to confuse users, and part of that is my odd implementation. I really need to make a distinction between key onsite categories and the tags used for web 2.0ish things. I should also fix the way links are made, since a number of categories I've made result in messy URLs, which break some of my engine's inner workings.
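
Cleaning up those URLs probably just means slugging the category names before they're linked. A sketch of the sort of thing (a hypothetical helper, not my actual engine code):

    <?php
    // turn a free-form tag into a URL-safe slug
    function tag_slug($tag) {
        $slug = strtolower(trim($tag));
        // collapse spaces and punctuation into single dashes
        $slug = preg_replace('/[^a-z0-9]+/', '-', $slug);
        return trim($slug, '-');
    }
    echo tag_slug('Web 2.0 Stuff!'); // prints "web-2-0-stuff"
    ?>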

I think some higher logic will emerge, but for now I think I'll keep up the odd tag/category system. Just one more thing for users of this site to be aware of. Anyways, it's late, I'm going to bed.

Link | 0 Comments |


2006.01.17

Comics Code

00.56.11 - Mark

I like my comics, and between a not-so-great comics page in the local paper and my general avoidance of newspapers, the web has kind of come to my rescue here, thanks to some great RSS feeds like the ones listed by (or created by) Tapestry Comics and similar services like Comic Alert.com and Interglacial's RSS feeds. Unfortunately, to read all my comics I might need to open up one to two dozen tabs in whatever browser I'm using. Now add in the fact that I haven't been checking all of my feeds daily.

So I'm working on a personal page generator that pulls up just the images and none of the extra code around them. In some places this is trivial, because they use a standard Year/Month/Day scheme. ucomics (Universal Press Syndicate's comic site) is one of them, and most webcomics also follow that scheme (or at least something similar). Some webcomics use a sequential counter, which presents a slight challenge, but nothing impossible.
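
For the date-based sites, the generator barely deserves the name. A sketch, with a made-up URL pattern standing in for any real site's:

    <?php
    // build today's comic URL for a site using a Year/Month/Day scheme
    // (the path pattern here is invented; each site has its own)
    $url = 'http://comics.example.com/strips/' . date('Y/m/d') . '.gif';
    echo '<img src="' . htmlspecialchars($url, ENT_QUOTES) . '" />' . "\n";
    // a sequential-counter site just needs the last known number plus one
    ?>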

Unfortunately, a few of my favorite comics rest at United Media's comic site, Comics.com, which, likely because of others like me, doesn't use a regular numbering scheme for the comics (the URLs are a different matter). I think that's doable, but I'll need to learn about scraping webpages (which wouldn't be the worst thing). I'm much more frustrated with King Features, who have the strictest regulations for their comics, making you pay to access their site (DailyINK.com) or requiring the people publishing them to really lock up the pages displaying them (javascript and blocking offsite referrals).

Anyways, I guess I need to figure out the numbering system or learn how to page scrape. I suppose the plus side is that these self-motivated programming projects are teaching me a lot more about programming and development than some of my classes have. Plus, it's fun. Fun is good.

Link | 0 Comments |





