Showing Color Chips From Sass Variables

At Causes we like to make our code more maintainable by building reusable components. Part of this strategy includes assigning variable names to hex colors using Sass. This allows us to more easily reuse the same colors everywhere, which improves consistency and makes it easier to re-color the entire site when our design needs change.

To increase the visibility of these reusable components, we’ve been building out a collection of things that designers and engineers can reference, drop into projects, and iterate upon. The code sits next to the rendered version, allowing people to easily see the implementation required to produce the result. We call this collection the component gallery. It helps us do more with less code, be more consistent, and iterate on global changes more easily and effectively.

When we started fleshing out the color variables we wanted to use throughout the site, it seemed natural to show these colors in the component gallery as Pantone color chips. That way, designers could reference the colors that we are using, we’d have a single place to see that all of the colors look great next to each other, and engineers could easily pluck variable names when implementing designs to match the mocks.
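
As a rough sketch of what this looks like in practice (the variable names, hex values, and class names below are invented for illustration, not our actual palette), the Sass boils down to something like:

// Hypothetical palette: these names and hex values are illustrative only
$action-green: #7ab648;
$link-blue:    #3779ad;
// Components reference the names rather than raw hex codes
.donate-button {
  background-color: $action-green;
}
// And the component gallery renders a labeled chip for each variable
.color-chip--action-green {
  width: 120px;
  height: 80px;
  background-color: $action-green;
}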

Read on
Joe Lencioni

Hashing for the Homeless

The Legend of John Henry holds a steel driver in contest with a steam-powered hammer, casting human excellence against the progress of technology. Henry is a folk hero, but he dies betting against technology. The point isn’t that technology wins, though — we don’t really lose when technology wins.

I’m a technologist, but I’m not in it for the tech — I’m in it for making things that make things better. Code is an unreasonably effective lever. It sounds syrupy to say it, but I believe that computers are the best way for me to make the world better. They work faster and more tirelessly than I could. Bits travel the world freely, providing value to people I will never meet. Things I wrote 10 years ago are still doing useful work, and they may continue to do so after I am gone.

But there is a gap in society. Technologists are seen as wizards, as in the Matrix. Sometimes they are shown as heroic, nerdy, or villainous, but always unassailably “other”. Normal people don’t do this thing. Normal people feel disenfranchised by technology. Some people feel it’s useful but don’t see themselves ever producing with it. Very few people see technology for what it really is: a tool for your use. People often suppose that tech sophistication is a function of generation — that there was a web generation, a mobile generation, that the next generation will get it better.

I disagree. Tech is changing more rapidly, not less, while our ability to incorporate the new capabilities into our practices, norms, and laws is staying constant. Each generation’s youth get a head start on incorporating new things because early in life everything is new — we have fewer bad patterns to match. But the faster the strides of tech, the more quickly youth’s head start is overtaken. But we can choose not to be John Henry.

It would help everybody if we worked to close this gap, and casting the gap as generational does real harm because it encourages waiting while we might act.

Read on
Jeremy Dunck

Overcommit: The Opinionated Git Hook Manager

At Causes, we care deeply about code quality. We promote thorough, offline code review through Gerrit and take pride in each commit we make. Due to the sheer volume of code review and the number of engineers on our team, it’s important that by the time other engineers review our code we have an established baseline of quality.

There are a few important ingredients to making a good commit:

  • Correctness: The code does what you expect it to do
  • Commit message: Tim Pope provides an excellent summary of what makes a good commit message
  • Style: The code matches our team’s coding styles
  • Test coverage: Relevant tests have been run, and any new features have spec coverage

Enter overcommit. It evolved from a single-file linter into a full-fledged, extensible hook architecture, and it is available as a Ruby gem:

gem install overcommit

What does it do? In short, it automates away all the tedium before a commit reaches code review. It ships with a set of opinionated lints that ensure a level of consistency and quality in our commits.

In Action

Here’s an example of overcommit saving me from committing janky code:

❯❯❯ echo "eval('alert(\"hello world\")');" > eval.js
❯❯❯ git add eval.js
❯❯❯ git commit
Running pre_commit checks
  Checking causes_email...........OK
  Checking test_history...........No relevant tests for this change...write some?
  Checking restricted_paths.......OK
  Checking js_console_log.........OK
  Checking js_syntax..............FAILED
    eval.js: line 1, col 1, eval can be harmful.

    1 error
  Checking author_name............OK
  Checking whitespace.............OK

!!! One or more pre_commit checks failed

Read on
Aiden Scandella

Working With Asynchronously Loaded JavaScript Objects

Telling browsers to load large JavaScript files asynchronously can significantly improve performance. This prevents the browser from blocking the rendering of the page, allowing it to be viewed more quickly. However, if your page depends on scripts that are loaded asynchronously, such as a third-party service’s API, making the scripts work together is not automatic.

At Causes we use Facebook’s large (nearly 60 KiB gzipped) JavaScript API on our pages. Although they recommend loading it asynchronously, we were already putting our JavaScript at the bottom of the page and weren’t convinced that async would give us much additional benefit. However, after some non-scientific performance tests it appeared that switching to asynchronously loading the Facebook API could reduce the time to DOMContentLoaded by nearly a full second on our pages.
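
The general shape of the technique is sketched below. This is a simplified illustration rather than our production code: the helper name is made up, and the SDK URL shown is Facebook’s publicly documented one.

// Inject the script tag dynamically so the browser never blocks rendering on it,
// and defer any dependent code until the script has actually loaded.
function loadScriptAsync(src, onReady) {
  var script = document.createElement('script');
  script.async = true;
  script.src = src;
  script.onload = onReady;
  document.getElementsByTagName('head')[0].appendChild(script);
}
loadScriptAsync('//connect.facebook.net/en_US/all.js', function() {
  // Only call the third-party API (e.g. FB.init) from here; before this
  // callback fires, the FB object may not exist yet.
});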

Read on
Joe Lencioni

10 Easy Ways to Craft More Readable CSS

Always code as if the [person] who ends up maintaining your code will be a violent psychopath who knows where you live. Code for readability. —John Woods

Diving into a large, old piece of CSS typically is neither easy nor pleasurable. I find that the biggest challenges in working with old CSS often lie in understanding the purpose and interactions of the styles.

When styling new elements, we have the entire context of the implementation immediately available, and it is easy to write styles that make sense to us at that very moment. However, a few weeks later, or to a fresh pair of eyes, what made a lot of sense at the time can end up being a lot more cryptic. Without a clear understanding of the purpose and interactions of the styles, modifying stylesheets can be dangerous, tedious, and cumbersome. Therefore, it is important to communicate enough context so that future developers will be able to grok the code easily and make informed decisions.

At Causes, we have adopted the following practices which we believe have improved the maintainability of our stylesheets, reduced bugs, and increased developer velocity. When you have finished reading this, I hope that you will have a few more tools to help move your codebase toward greater maintainability.

Read on
Joe Lencioni

Even Faster: Loading Half a Billion Rows in MySQL Revisited

A few months ago, I wrote a post on loading 500 million rows into a single InnoDB table from flat files. This was part of an effort to un-‘optimize’ a premature optimization in our codebase: user action credits were being stored in monthly sharded tables to keep the tables small and performant. As our use of the code changed, we found more and more that we had to do a query for each month to see if a user had taken an action. We implemented some performance optimizations (mainly memcaching values from prior months, since they are immutable), but it was still overly complicated and prone to bugs. Since we had another table that was 900m rows, it seemed reasonable to collapse these shards into one 500m row table.

Since writing the last post, I’ve learned that there’s a much quicker way to combine those tables — as long as you already have the data in MySQL. MySQL allows selecting from one table into another via the INSERT INTO ... SELECT statement:

INSERT INTO dest_table (col1, col2) SELECT col1, col2 FROM source_table;

which might look something like:

INSERT INTO credits_new (user_id, activity_id, created_at)
SELECT user_id, activity_id, created_at FROM credits;

This shouldn’t be surprising; an ALTER TABLE on an InnoDB table creates a new table with the new schema and copies the rows from the old table over to the new table.

Read on
Adam Derewecki

Loading Half a Billion Rows Into MySQL

Background

We have a legacy system in our production environment that keeps track of when a user takes an action on Causes.com (joins a Cause, recruits a friend, etc). I say legacy, but I really mean a prematurely-optimized system that I’d like to make less smart. This 500m record database is split across monthly sharded tables. Seems like a great solution to scaling (and it is)—except that we don’t need it. And based on our usage pattern (e.g. to count a user’s total number of actions, we need to query N tables), this leads to pretty severe performance degradation. Even with a memcache layer sitting in front of the old month tables, new features keep discovering new N-query performance problems. Noticing that we have another database happily chugging along with 900 million records, I decided to migrate the existing system into a single table setup. The goals were:

  • Reduce complexity. Querying one table is simpler than N tables.
  • Push as much complexity as possible to the database. The wrappers around the month-sharding logic in Rails are slow and buggy.
  • Increase performance. Also related to querying one table being simpler than querying N.
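
To make the N-query problem concrete, here is roughly what the difference looks like (illustrative table and column names only):

-- Before: counting one user's actions means a query against every monthly shard,
-- with the results summed in application code.
SELECT COUNT(*) FROM credits_2012_01 WHERE user_id = 42;
SELECT COUNT(*) FROM credits_2012_02 WHERE user_id = 42;
-- ...and so on for each remaining month.
-- After: a single table answers the same question in one query.
SELECT COUNT(*) FROM credits WHERE user_id = 42;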

Alternative Proposed Solutions

MySQL Partitioning: This was the most similar to our existing setup, since MySQL internally stores the data in different tables. We decided against it because it seemed likely that it wouldn’t be much faster than our current solution (although MySQL can internally do some optimizations to make sure you only look at tables that could possibly have the data you want), and it would still be the same complexity we were looking to reduce (as well as the only part of our database setup using partitioning).
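
For reference, the kind of partitioning we considered would have looked roughly like the sketch below. This is hypothetical — we never deployed it — and MySQL additionally requires the partitioning column to be part of every unique key on the table.

-- Hypothetical: range-partition a single credits table by month instead of
-- maintaining separate monthly tables by hand.
ALTER TABLE credits
PARTITION BY RANGE (TO_DAYS(created_at)) (
  PARTITION p201201 VALUES LESS THAN (TO_DAYS('2012-02-01')),
  PARTITION p201202 VALUES LESS THAN (TO_DAYS('2012-03-01')),
  PARTITION pmax VALUES LESS THAN MAXVALUE
);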

Redis: Not really proposed as an alternative, because the full dataset won’t fit into memory. However, we are considering loading a subset of the data into Redis to answer queries we make frequently that MySQL isn’t particularly good at (e.g. ‘which of my friends have taken an action’ is quick using Redis’s built-in SET UNION function). The new MySQL table might be performant enough that it doesn’t make sense to build a fast Redis version, so we’re avoiding this as a possible premature optimization, especially with a technology we’re not as familiar with.

Read on
Adam Derewecki