Blasting a Camera into Space

Shortly before I left the US in June 2017, some former coworkers and I worked on our very own space balloon project. Our goal was to send a video camera up to 100,000 feet on a helium balloon and capture video footage of the journey.

It took us three attempts to succeed, and we were blown away when we first saw the footage. Our payload and camera reached an altitude of almost 100,000 feet (~30.5 km) - roughly three times the typical cruising altitude of a commercial airplane.

On the first attempt we miscalculated the amount of helium to bring and pump into the balloon, so unfortunately it wasn't enough to carry our payload. Being the software engineers that we were, we weren't as careful as we should have been about how much things weighed. Our radar reflector, for instance, was made of cardboard and shiny gift paper. We started gluing the gift wrap onto the cardboard in the car on our way to the launch site, and of course, we didn't account for the weight of the glue in our calculations. The glue made the radar reflector much, much heavier than planned.

For our second attempt, we re-engineered our radar reflector to be light - really, really light. Instead of thread we used floss, and instead of cardboard we used a paper bag lined with tin foil. When we got to the launch site the second time, though, the nozzle of the helium tank we had rented was broken (sigh). We quickly went to a nearby store, found a replacement, and we were good to go.

I must say, weather-wise it was hot. Like, really, really, really hot! In that extreme heat, Nick was debugging the board he had built for collecting sensor data.

Last, but definitely not least, here's the awesome team that made it happen:

Team Space Dust: (left to right) Carlos Gasperi, Nishanth Alapati, Sai Pinapati, Nick Pisarro, (bottom center) Islam El-Ashi, and (bottom bottom) our shitty radar reflector that we used in the first trial.

Breaking Aljazeera’s CAPTCHA

I was on Aljazeera Arabic's website the other day and, as I was voting on a poll, was presented with the following screen:

The CAPTCHA in the screen above immediately caught my attention. The distortions in it seemed very simple: the text was not warped in any way, and there was no overlap between the characters.

The following is a URL for one of the CAPTCHAs:

http://www.aljazeera.net/Sitevote/SiteServices/Contrlos/SecureCAPTCHA/GenerateImage.aspx?Code=EANmyyXghpajFhOX6rCRKQ==&Length=4

Opening the URL above and refreshing the page a few times gives the following CAPTCHAs:

The dashed grey lines are randomized, while the letters in the CAPTCHAs above are static. The letters are encoded in the Code parameter in the URL. Notice that each character comes in two forms: a straight form and one that is slightly rotated.

Aljazeera's CAPTCHA can easily be broken by doing the following:

  1. Removing the dashed grey lines
  2. Finding the characters in the image
  3. Separating the characters in the image
  4. Classifying each character

I'll be using Octave/Matlab for the above tasks and will explain my algorithm using the following CAPTCHA as an example.
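
Before the full walkthrough, here's a rough illustration of step 1. The post itself uses Octave/Matlab; this is just a minimal sketch of the same idea in browser-side JavaScript, assuming the CAPTCHA has been drawn onto a canvas and that the dashed lines are a lighter grey than the letters. The threshold value is a guess for illustration, not a number taken from the actual solution.

function removeGreyLines(canvas, threshold) {
  var ctx = canvas.getContext('2d');
  var image = ctx.getImageData(0, 0, canvas.width, canvas.height);
  var px = image.data; // RGBA bytes, 4 per pixel
  for (var i = 0; i < px.length; i += 4) {
    var intensity = (px[i] + px[i + 1] + px[i + 2]) / 3;
    if (intensity > threshold) {
      // Lighter than the letters: treat it as part of a dashed line
      // (or the background) and paint it white.
      px[i] = px[i + 1] = px[i + 2] = 255;
    }
  }
  ctx.putImageData(image, 0, 0);
}

With a threshold anywhere between the grey of the lines and the near-black of the letters, only the characters survive, which makes the remaining steps much easier.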

Continue reading “Breaking Aljazeera’s CAPTCHA” »

El-Tetris in HTML5. See it in action!

Following up on my previous post on the El-Tetris algorithm, a Tetris player that clears an average of 16 million rows per game and, at the time of this writing, is the best-performing one-piece Tetris AI out there, I thought I would provide an implementation rather than just a description of the algorithm.

The algorithm is implemented entirely in JavaScript, and the rendering is done on an HTML5 canvas. The rendering is purely cosmetic (so you can actually see how the game is progressing). If you're only interested in the final score, you can speed up the game by enabling "Hardcore Mode": rendering the board is disabled, and the algorithm runs continuously in the background. You can also change the size of the board; the smaller the board, the shorter the game.
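
As a rough sketch of what that toggle amounts to (the helper names here are hypothetical; the real code is in the linked source below):

function run(game, hardcoreMode) {
  if (hardcoreMode) {
    // "Hardcore Mode": no rendering, just play moves back to back.
    while (!game.isOver()) {
      game.playBestMove();
    }
    console.log('Final score: ' + game.score());
  } else {
    var tick = function () {
      if (game.isOver()) {
        console.log('Final score: ' + game.score());
        return;
      }
      game.playBestMove();  // El-Tetris picks and plays one move
      game.render();        // draw the board on the canvas
      setTimeout(tick, 50); // yield so the browser can repaint
    };
    tick();
  }
}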

Full source code can be found here.

Note: For faster execution, use Google Chrome.

inFormed – A LinkedIn Hackday Project

Last Friday I participated in the LinkedIn Intern Hackday event that was hosted at LinkedIn's headquarters in Mountain View. I joined my classmates from Waterloo: Michael Truong, Kenneth Ho, and Sumit Pasupalak.

We started a project dubbed "inFormed". The aim of the project is to raise awareness of global issues around the world. Currently, it's a Firefox plugin. As you browse the web, it analyzes the content of the page you are viewing and, based on that content, shows a fact or a statistic that is both relevant to the content of the page and related to a global issue. Along with that, it provides a link to a charity where you can donate and/or get involved.

For example, if you are buying a book online or browsing an educational site, you would see, at the bottom right-hand corner, something like this:

Have a look at the screenshots below for some more examples. Take a close look at the fact displayed at the bottom right-hand corner and notice how it's related to the content of the page.

[Gallery: inFormed screenshots]

To summarize, the goals behind inFormed are the following:

  • Help you stay informed about global issues around the world.
  • Make it easy to get involved by providing links to related charities.
  • Provide a seamless and unobtrusive user experience.

Behind the scenes, inFormed sends the URL of your current page to our server, where we fetch the content of that page, extract the text, run it through a Naive Bayes classifier to select the fact or statistic most likely to be relevant, and feed that back to the browser.
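
To illustrate the classification step, here is a toy sketch of a Naive Bayes classifier in JavaScript. This is not the actual inFormed server code, and the model format (per-topic priors and word counts) is made up for the example:

function classify(pageText, model, vocabSize) {
  var words = pageText.toLowerCase().match(/[a-z']+/g) || [];
  var bestTopic = null;
  var bestScore = -Infinity;
  for (var topic in model) {
    var m = model[topic]; // { prior: P(topic), wordCounts: {...}, totalWords: n }
    // Sum log-probabilities, with Laplace smoothing for unseen words.
    var score = Math.log(m.prior);
    for (var i = 0; i < words.length; i++) {
      var count = m.wordCounts[words[i]] || 0;
      score += Math.log((count + 1) / (m.totalWords + vocabSize));
    }
    if (score > bestScore) {
      bestScore = score;
      bestTopic = topic;
    }
  }
  return bestTopic; // e.g. "education", which maps to a fact and a charity link
}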

This event was the first hackathon we had ever participated in, and we are very proud to have made it to the final round! We didn't win the event, but we were extremely impressed by the quality of the projects that people presented.

We had some votes on Twitter as well:

inFormed will need a little more work to be ready to publish. Should we invest the time in doing so? Would you use it? Let us know!

El-Tetris – An Improvement on Pierre Dellacherie’s Algorithm

Update: Full source code and implementation now available here!

This algorithm is part of a project I worked on last term for my Artificial Intelligence class. The algorithm, which I will be referring to as “El-Tetris”, plays Tetris by inspecting only one piece at a time (as opposed to two or three pieces in some variations of the game). It is based on Pierre Dellacherie’s Tetris algorithm, which is known as one of the best one-piece Tetris-playing algorithms.

Before discussing the details of the algorithm, let’s briefly look into how you, a human player, would play Tetris.

When you play Tetris, you are faced with two decisions every time you are given a piece:

  1. Where to position the piece
  2. Which orientation of the piece to play

Naturally, you want to eliminate as many rows as possible and maximize your score. To accomplish that, you would (subconsciously) be doing the following:

while the game is not over:
  examine the given piece
  find the best possible move (an orientation and a position)
  play the piece

What would be the best possible move then? You would usually try to eyeball certain features to help you determine that. You might, for example, ask yourself these questions:

  • If I were to play the move, would that create holes in the game board?
  • If I were to play the move, how many rows would I clear?
  • If I were to play the move, what would be the height of the highest column?
  • … and so on

That’s exactly what El-Tetris does. For every given piece, it evaluates every possible orientation and position against a set of features. The move with the best evaluation is the one that is played.
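
As a sketch of that selection loop (the board representation, the simulate() helper, and the feature functions are placeholders here; the actual features and their weights are what the rest of the post is about):

function pickMove(board, piece, features) {
  var best = null;
  for (var o = 0; o < piece.orientations.length; o++) {
    var orientation = piece.orientations[o];
    for (var col = 0; col + orientation.width <= board.width; col++) {
      // Drop the piece in this column and look at the board that results.
      var result = simulate(board, orientation, col); // hypothetical helper
      // Score the move as a weighted sum of the feature values.
      var score = 0;
      for (var f = 0; f < features.length; f++) {
        score += features[f].weight * features[f].value(result);
      }
      if (best === null || score > best.score) {
        best = { orientation: orientation, column: col, score: score };
      }
    }
  }
  return best; // the move with the highest evaluation gets played
}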

Continue reading “El-Tetris – An Improvement on Pierre Dellacherie’s Algorithm” »

Live Notes

For the past few months I have been involved with BigBlueButton, an open-source web conferencing system. That, along with digging into Etherpad's source code, really ignited my interest in real-time collaboration technologies.

I started an open-source project to extend BigBlueButton with real-time document collaboration for conference participants. The project is still at a very early stage, but it will be out for beta testing in the next release of BigBlueButton.

Before I start ranting about the project, which I am tentatively and temporarily calling "Live Notes", let me first show you a demo. Continue reading “Live Notes” »