As the news of Stephen Hawking’s death reverberated across the internet, the r/science community on Reddit made an unusual post: an opportunity to discuss the life of one of humanity’s greatest minds. It was the first post of its kind in a subreddit normally dedicated to discussing scientific papers.
A couple of weeks ago there was a Reddit post on /r/Android recommending that Facebook users upload photos via the mobile website rather than through the official Android Facebook app. The app reportedly compressed an original 8MP (4.5MB) photo down to only 0.6MP (100kB), whereas the mobile website uploaded it at 3MP (440kB). For a typical 4:3 photograph, 0.6MP works out to neither dimension exceeding 1,000 pixels! Viewers on almost all current smartphones and tablets would be looking at an image smaller than their screen resolution. For a social network so heavily driven by photographs, you would think Facebook would do a better job maintaining some modicum of image quality. Most users probably have no idea their images are being so heavily degraded by uploading via the app. This post examines the varying quality of Facebook image uploads in an attempt to identify the best option if you must upload to Facebook.
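The dimension claim is easy to verify yourself. A quick sketch of the arithmetic: for a fixed aspect ratio, width × height equals the pixel count, so the width is the square root of (pixels × aspect ratio).

```python
import math

# Back-of-the-envelope conversion from a megapixel figure to pixel
# dimensions for a given aspect ratio. For 4:3, width/height = 4/3 and
# width * height = total pixels, so width = sqrt(pixels * 4/3).
def dimensions(megapixels: float, aspect_w: int = 4, aspect_h: int = 3):
    """Return (width, height) in pixels, rounded to whole pixels."""
    pixels = megapixels * 1_000_000
    width = math.sqrt(pixels * aspect_w / aspect_h)
    height = pixels / width
    return round(width), round(height)
```

Running this on the app's 0.6MP output gives roughly 894 × 671 pixels, which is indeed smaller in both dimensions than most current phone and tablet screens.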
One of the most common images I see during science presentations is the frequency of publications within a particular field over time. It’s a great way to show the growth of the field while attempting to validate the worthiness of the research that follows. As far as I can tell, most people manually assemble this data with sequential searches on Google Scholar or Web of Science. This seemed like a straightforward opportunity for automation, so I made a little website that does just that. It takes a Google Scholar search query and a range of years and plots the number of results over time.
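The tally itself is simple once you have per-year result counts. Google Scholar has no official API, so in the sketch below `results_for_year` is a hypothetical stand-in for whatever lookup supplies the counts (scraping, Web of Science exports, or manual searches):

```python
# Sketch of the per-year tally the site performs. `results_for_year` is
# any callable mapping a year to a publication count; the data source
# behind it is up to you.
def publications_over_time(results_for_year, start_year, end_year):
    """Return (year, count) pairs for the inclusive range of years."""
    return [(year, results_for_year(year))
            for year in range(start_year, end_year + 1)]
```

With the pairs in hand, any plotting library can draw the familiar growth curve.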
Several of the websites I’ve created use a background image as part of the design. It turns out that making the image stay centered, maintain the same aspect ratio, scale with the browser, and always fill the entire page is a difficult task. After several infuriating hours of trial and error, I finally figured out how to make all the above occur in a modern browser using only CSS3. Check out this JSFiddle for an example of it in action or read on for an explanation.
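The core of the widely used CSS3 technique for this is `background-size: cover`. A minimal sketch (the image URL is a placeholder, and the linked JSFiddle may differ in detail):

```css
/* Minimal sketch: a centered background that keeps its aspect ratio,
   scales with the browser, and always fills the page.
   bg.jpg is a placeholder for your own image. */
html {
  background: url(bg.jpg) no-repeat center center fixed;
  background-size: cover;
}
```

`cover` scales the image to the smallest size that still covers the viewport in both dimensions, cropping whichever axis overflows.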
The first block reward halving for Dogecoin resulted in many Reddit users asking when the event was occurring and how much the new block reward would be. I haven’t been able to find a good countdown website, so I decided to throw one together myself. It uses the DogeChain API to grab the current block number and estimates the time until the next change in the block reward. Since the Dogecoin protocol fixes the block reward schedule in advance, it’s relatively simple to calculate with some accuracy. Check it out by clicking here!
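The estimate boils down to two protocol parameters. A minimal sketch, assuming the early Dogecoin schedule of a reward change every 100,000 blocks and a 60-second target block time (check the protocol parameters yourself before relying on these):

```python
# Assumed protocol parameters (hedge: verify against the Dogecoin spec).
HALVING_INTERVAL = 100_000   # blocks between reward changes
BLOCK_TIME_SECONDS = 60      # target block time

def blocks_until_next_halving(current_block: int) -> int:
    """Blocks remaining until the next reward-change boundary."""
    return HALVING_INTERVAL - (current_block % HALVING_INTERVAL)

def seconds_until_next_halving(current_block: int) -> int:
    """Rough countdown, assuming blocks arrive at the target rate."""
    return blocks_until_next_halving(current_block) * BLOCK_TIME_SECONDS
```

Since actual block times fluctuate around the target, the countdown is only an estimate and should be refreshed as new blocks arrive.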
A couple of years ago I made a simple Twitter Stats page to depict my tweeting activity. It was originally powered by some datasets pulled from TweetStats, but I eventually upgraded it to run entirely from my own server. It was extremely barebones: every hour it grabbed my Twitter feed and downloaded any tweets added since the previous update. Unfortunately, because Twitter does not offer the entire tweet history via the website or this XML feed, I was missing well over a year of data. On top of that, problems accessing the feed meant I would regularly lose my entire local cache of tweets and have to spend a lot of time fixing everything. I eventually decided to kill off the page, since I was losing more and more of the older tweets every time I had to repair the cache and Twitter kept changing the way the feed was presented.
One of the student groups I’m involved in at the University of Texas is the Biomedical Optics Graduate Organization (BOGO). We’ve recently had some changes in the leadership, and I’m now the Treasurer for the group. One of the tasks I decided to undertake was an updated website that keeps better track of our upcoming events and helps members announce their publications and achievements. Unfortunately, the webspace we’re provided by UT is very limited and essentially only allows static webpages (so, sadly, no WordPress). I spent a few hours throwing together the new design and making future content updates as easy as possible. Fortunately, PHP is available, so some of the more frustrating things to update (e.g. the list of members displayed in a table) can be automatically generated from a list of the users. I’ve also opted to use Google Calendar and Google Drive to provide functionality on the website in the form of our future events list and contact form. We’ve had some trouble in the past with incoming emails getting lost in our mailing list, so hopefully the new contact options will help alleviate that.
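The member table trick is a short PHP loop over a plain array. A sketch, with invented names and fields for illustration (the real site's data layout may differ):

```php
<?php
// Hypothetical member list; keeping it in one array (or a separate
// include file) means only this list needs editing when members change.
$members = [
    ['name' => 'Jane Doe', 'role' => 'President'],
    ['name' => 'John Roe', 'role' => 'Treasurer'],
];

echo "<table>\n";
foreach ($members as $m) {
    // htmlspecialchars guards against stray markup in the data.
    echo '<tr><td>' . htmlspecialchars($m['name']) . '</td><td>'
       . htmlspecialchars($m['role']) . "</td></tr>\n";
}
echo "</table>\n";
?>
```

Anyone updating the roster then edits a list of entries rather than hand-maintained table markup.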
Well, it’s that time of year again when I randomly and decisively decide to completely redesign my website. I’ve decided to abandon the automatically generated Flavors.me profile in favor of a more versatile WordPress installation (again). I’m planning on making the front page a mostly static representation of my real-life identity and relying …