Increasingly, browsers are taking on a central role in our daily lives. With web apps for everything, we have entrusted our most intimate data to online services such as Facebook, Amazon or GMail. This move has forced us to step up the security of those services, and due diligence has brought us HTTPS-only sites, two-factor authentication, and so on. But there’s still a weak link in the chain: a rogue browser extension can undermine all of those security measures.
Most people seem unaware of how big an attack vector browser extensions have become. They’re still largely unregulated territory, and although there are inherent limits to what they can do, there is little to no protection against extension malware: your antivirus can’t help you here.
In this post, I’ll share what I found by investigating one such malware extension that a friend of mine was infected by. I hesitated a lot about publishing all of the code, but finally decided against it; I would never want to help propagate malware. However, I still want to show how this malware functions, so I’ll be posting extracts of the code throughout. I’ve taken the liberty of removing some lines that were irrelevant to the point I was making, but everything else is exactly as I found it.
I suffer from the most common responsive issue. As my recent post history may attest, performance matters to me. At the same time, though, I also want my images to look great on every screen, and that’s not as trivial as it may sound. For a long time, it’s been impossible to have high quality images of minimal size on every screen. This classic problem is only now being solved by the Responsive Issues Community Group, but the solution isn’t quite ready for prime time yet.
Responsive images, a relatively immature feature
Indeed, as of this writing, CSS `image-set` only has 62% browser support and is still very much an editor’s draft. The `srcset` attribute isn’t much better, clocking in at 67%, and `<picture>` is at a dismal 57%.
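For reference, here’s roughly what the new markup looks like; the image filenames below are made up for illustration:

```html
<!-- srcset/sizes: the browser picks the best file for the viewport -->
<img src="photo-800.jpg"
     srcset="photo-400.jpg 400w, photo-800.jpg 800w, photo-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 800px"
     alt="A responsive photo">

<!-- <picture>: art direction, e.g. a tighter crop on small screens -->
<picture>
  <source media="(max-width: 600px)" srcset="photo-crop-400.jpg">
  <img src="photo-800.jpg" alt="A responsive photo">
</picture>
```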
Now, these new specifications are backwards-compatible (as in, they won’t break your site), so you could argue that mediocre support is a moot point. But support is mainly lacking in the very browsers that actually need this spec: as of right now, there are no phones out there that support it in their default browser. So why even bother?
I really like using static site generators. I guess the computer scientist in me likes optimized systems, and that’s exactly what I get here: static sites make for a secure, performant and simple setup. It doesn’t get much more basic than serving static files with Nginx. It’s rather hard to convince myself to manage a big PHP framework and an SQL database just to show some blog posts, but I am painfully aware of how much easier WordPress is for its users. Compare WordPress’s workflow to that of a static site: even though making changes to my Jekyll site may seem rather easy to me, it really isn’t that straightforward. Here’s how I’ve done it up until this point:
- Write a post in Markdown
- Optimize images manually
- Commit changes to GitHub
- Build the site on my local machine
- Compress generated HTML and CSS files
- Manually transfer the changed files to the server
Good luck trying to convince your clients to use a static site if this is what it takes for them to do a simple task, like fixing a typo. Getting your content online requires knowledge of Markdown, compression, Git, the command line, and file transfer. That’s a very steep learning curve if you aren’t technically inclined. It’s also a rather tedious process. What if we could reduce it to one or two steps?
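The six-step workflow above could, in principle, collapse into a single deploy script. Here’s a rough sketch; the server address and paths are hypothetical, and by default it only prints the steps instead of running them:

```shell
#!/bin/sh
# Sketch of a one-step deploy for a Jekyll site.
# Set DRY_RUN=0 to actually execute; the default just echoes each step.
set -e
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

run git add -A
run git commit -m "New post"
run git push                                        # steps 1-3: version control
run jekyll build                                    # step 4: generate _site/
run gzip -k -r _site/                               # step 5: pre-compress HTML/CSS
run rsync -az --delete _site/ user@example.com:/var/www/  # step 6: upload
```

Writing a post is then "edit Markdown, run the script": two steps instead of six.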
In my previous post on Web Performance 2.0, I wrote:
Cache, compression and CDNs are still relevant, and should be used.
In retrospect, this was a bit of a hypocritical sentence, since I was only doing one of the three on this site, namely compression. Today, we’ll be taking a look at web caching. It’s not too hard to put in place, but it is easy to mess up, so I’ll try to proceed with care.
I run a fully static site hosted on a DigitalOcean droplet with Nginx, so luckily for me, I just have to mess with some config files. However, like any other good university student, I studied the theory long and arduously before I could ever dream of touching those files.
All right, that’s a lie, I totally just dived headlong into my `nginx.conf` and googled stuff as I went. Still, let’s be smarter than I was, and take a minute to look at the theory.
There are a number of HTTP headers that give the browser instructions on how to cache a website. As always when you’re working with the Web, for historical reasons, it’s far from simple or elegant. There are quite a few headers to set, and they often overlap. Here are the cache headers that you’ll probably have to consider:
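To make this concrete, here’s a hypothetical Nginx sketch covering the two you’ll meet most often, Cache-Control and ETag; the file extensions and max-age value are only illustrative:

```nginx
# Far-future caching for static assets (ideally with fingerprinted filenames)
location ~* \.(css|js|png|jpg|svg|woff2)$ {
    add_header Cache-Control "public, max-age=31536000, immutable";
}

# HTML: let the browser keep a copy, but revalidate on every visit
location / {
    add_header Cache-Control "no-cache";
    etag on;   # Nginx computes a validator so 304s can be served
}
```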
The web is changing. We’ve been calling it Web 1.0, then 2.0, 3.0, 4.0, 5.0… Yet to this day, no one really knows what any of those is supposed to mean! To me, arbitrarily assigning version numbers to the web is sensationalism at best; it isn’t based on anything tangible. And yet, I’m going to talk about something that I actually believe in: Web Performance 2.0.
A couple of weeks ago, I went to Paris Web, a conference about the Web, held in (you guessed it!) Paris. I heard lots of talks and came back with a lot of food for thought. One talk stuck with me in particular: in his WebPerf 2.0 talk, Stephane Rios called for a new, metaphorical version 2.0 of Web performance.
Why call it 2.0?
Unlike “Web X.0”, the name “Web Performance 2.0” isn’t completely unfounded. It still sounds a bit buzzword-y to me, but I get the idea. In this case, the version number is more coherent, since it is also that of the latest version of HTTP. The HTTP protocol is so inextricably tied to how we do performance that it’s almost acceptable to say “Web Performance 2.0” instead of “HTTP/2 Web Performance”. This new version of HTTP will introduce major changes in how we deal with Web performance, so the number 2.0 is actually based on something concrete.
I’ve long been resistant to Sass. To me, it seemed like a complicated and superfluous layer of abstraction that would get in the way of how I usually write my CSS, and perhaps even create bloated, inefficient code — boy, was I wrong.
As it turns out, Dan Cederholm had the exact same fear as I did about having to change the way he writes CSS, but the introduction to his book persuaded me to take a look at it:
But remember, since the SCSS syntax is a superset of CSS3, you don’t have to change anything about the way you write CSS. Commenting, indenting, or not indenting, all your formatting preferences can remain the same when working in .scss files. Once I realized this, I could dive in without fear.
Dan Cederholm, Sass for Web Designers (Chapter 1), 2013
I’ve been using Jekyll, the static site generator, for close to a year now, even for super simple sites. I really like how I’m able to keep my HTML DRY by using imports, variables and layouts; it’s a system that makes any and all edits incredibly easy and sensible. In a sense, Sass is just the equivalent of Jekyll for CSS: I can import CSS from other files, use variables, and inject code into my predefined mixins, just like I can with HTML in Jekyll. And not only does my CSS get more maintainable, I’ve even found that my design as a whole gets better! Here’s how Sass has helped me out:
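A minimal sketch of those three features; the file, variable and mixin names are made up:

```scss
// Partials are imported, much like a Jekyll include
@import "variables";

$accent: #c0392b;               // variables keep colors consistent site-wide

@mixin rounded($radius: 4px) {  // a mixin you can inject code into
  border-radius: $radius;
}

.post-title {
  color: $accent;
  @include rounded(6px);
}
```

Since SCSS is a superset of CSS, any existing stylesheet is already a valid `.scss` file; these features can be adopted one at a time.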
I happened to stumble across a quote by Jeh Johnson, Secretary of Homeland Security, that was almost uncannily related to something that I had read the very same day.
TRANSLTR [The NSA’s supercomputer] was a success. In the interest of keeping their success a secret, Commander Strathmore immediately leaked information that the project had been a complete failure. […] Only the NSA elite knew the truth - TRANSLTR was cracking hundreds of codes every day. […] To make their charade of incompetence complete, the NSA lobbied fiercely against all new computer encryption software, insisting it crippled them and made it impossible for lawmakers to catch and prosecute criminals.
Dan Brown, Digital Fortress (Chapter 4), 1998
The current course we are on, toward deeper and deeper encryption in response to the demands of the marketplace, is one that presents real challenges for those in law enforcement and national security.
Let me be clear: I understand the importance of what encryption brings to privacy. But, imagine the problems if, well after the advent of the telephone, the warrant authority of the government to investigate crime had extended only to the U.S. mail.
Our inability to access encrypted information poses public safety challenges.
In fact, encryption is making it harder for your government to find criminal activity, and potential terrorist activity.
Jeh Johnson, Secretary of Homeland Security, RSA Conference, 2015
Note: Please read the full source; in all fairness, the above quote is taken out of context. I don’t necessarily believe in any conspiracy theories about secret NSA supercomputers (although…), or about Dan Brown having predicted things 15 years in advance; I just found it quite amusing to read these two quotes within the same day!
Yesterday, the headlines on many tech sites were all about the new Pebble Time smartwatch. Some sites were content with posting the facts, but a lot of people recognized that the launch was a forecast of what to expect in the future: WIRED and Engadget saw it as a tipping point for Kickstarter, and Mashable stated that it basically ensured the future success of wearables.
Yet I think that there’s more to it than just that.
The Pebble Time watch introduces a new paradigm for how we interact with our devices. In a sense, previous Pebble OS versions were not unlike a smartphone: a lockscreen (the watchface) behind which a list of downloaded apps is located. The user has to exit the lockscreen in order to choose the correct app to access the functionality they want; this is the old interaction paradigm.
In a world where things are increasingly being automated, it’s surprising that this simple dynamic hasn’t been automated yet. It’s surprising that we need to create our own feed of content and functionality on the go.
I read a lot of articles on a daily basis. But some of them really stand out to me: they give me food for thought, great insight into very interesting subjects, or are just generally eloquently written.
I thought that I’d compile a list of some of the best articles that I’ve read here.
- Why Static Website Generators Are The Next Big Thing – Mathias Biilmann Christensen, November 2, 2015
- 1,000,000 Websites – Jacques Mattheij, July 24, 2015
- What one may find in robots.txt – Thiébaud Weksteen, May 17, 2014
- Cool URIs don’t change – Tim Berners-Lee, 1998
A friend and I decided to take part in the IEEEXtreme 8.0 24-hour programming challenge a few weeks ago. Optimistic as it may have been for two freshmen undergrads, I think that we did alright!
The hard part wasn’t even the lack of sleep. I still felt good Saturday evening, after almost 50 hours without sleep. It was a learning experience, for sure. I wanted to write down what I had drawn from it.
You may code up the craziest hacks, but in the end, having specific knowledge of different algorithms is what will make you win sweet points. I’ve been meaning to read a book called Algorithms in Python, but haven’t had the time to do so until now.
At some point, I got stuck for good. But just explaining the problem to my teammate made it clear to me, and I figured it out on my own (that’s also why I’m writing this post: putting what I’ve learned from the competition into words suddenly makes it more concrete).
A lot of our problems were caused by the input. Though it was specified that all input would exactly match the description, we found out that that should be taken with a grain of salt. I wasn’t validating the input at all (that’s a no-no!) because I had been told it would be perfect, but they were actually adding a newline and a space at the end of their input, which messed with my script. I didn’t have access to the error codes, so I really had to solve that blindly.
But all in all, it was a lot of fun, and I think that it will get even better as I gain more knowledge about the algorithms that I need to implement versions of. I’ll be there next year!
We’re learning Java at the university. I usually feel like I have to test my code every few lines as I’m writing it, but today I was able to write 200 lines of code, compile it and have it just work. That is definitely a first for me (in Java, at least).
It’s not technically hard, really. What is hard is changing my programming habits. But I’m glad to see that I’m able to, even on small details like this.
For the last few weeks, I’ve been working on building a MAME arcade machine from an old computer. The software part is done (I’ll cover that in another post), and I have started to prepare the hardware. To save some money, I decided not to buy the I-PAC. Instead, hacking a keyboard seemed cheaper, and I just happened to have a lot of old, unused keyboards at hand. I thought that I’d just connect some wires from the keyboard to a button; when the button is pressed, it would emulate a keystroke. The only problem was that it would take up too much space, and it wouldn’t be practical. The connections wouldn’t be optimal, the keyboard might fall… All in all, a better approach was possible, and much needed.
If you open up a keyboard, you’ll often find a small PCB connected to a USB cable (and sometimes to some LEDs too), a rubber sheet with dots on it, and three plastic sheets:
I’ve just set up my Jekyll blog using Github Pages! First thoughts? “Huh, well that was easy.” Full instructions are here, but in a nutshell, here’s how it goes:
1. Fork a preset Jekyll repo
And once you have your own repo, go to the Settings and rename it to `username.github.io`.
2. Edit a few settings
You just need to tell Jekyll what your name is, what your blog will be called, and give it links to whatever social media you want to link to. To do that, go through the `_config.yml` file and fill it out like a form.
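The relevant part of `_config.yml` looks something like this; all of the values below are placeholders:

```yaml
# _config.yml — fill it out like a form
title: My Blog
author: Your Name
description: A blog about the Web
twitter_username: yourhandle
github_username: yourhandle
```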