I may also feel the need to spout off about my other interests, including chess, acoustics, and music. So, feel free to drop me a line to tell me how much you think this site sucks!
Some Lite Reading...
- 3-D Printed "Iron Man" Prosthetic Hands Now Available For Kids
PC World (drawing on an article from 3DPrint.com) notes that inventor Pat Starace has released his plans for a 3-D printable prosthetic hand designed to appeal both to kids who need it and their parents (who can't all afford the cost of conventional prostheses). The hand "has the familiar gold-and-crimson color scheme favored by Ol' Shellhead, and it's designed with housings for a working gyroscope, magnetometer, accelerometer, and other 'cool sensors', as well as a battery housing and room for a low-power Bluetooth chip and charging port." It takes about 48 hours in printing time (and "a lot" of support material), but the result is inexpensive and functional.
Read more of this story at Slashdot.
- If You're Connected, Apple Collects Your Data
fyngyrz (762201) writes It would seem that no matter how you configure Yosemite, Apple is listening. Keeping in mind that this is only what's been discovered so far, and given what's known to be going on, it's not unthinkable that more is as well. Should users just sit back and accept this as the new normal? It will be interesting to see if these discoveries result in an outcry, or not. Is it worse than the data collection recently reported in a test version of Windows?
Read more of this story at Slashdot.
- In UK, Internet Trolls Could Face Two Years In Jail
An anonymous reader writes with this news from The Guardian about a proposed change in UK law that would greatly increase the penalties for online incivility: Internet trolls who spread "venom" on social media could be jailed for up to two years, the justice secretary Chris Grayling has said as he announced plans to quadruple the maximum prison sentence. Grayling, who spoke of a "baying cybermob", said the changes will allow magistrates to pass on the most serious cases to crown courts. The changes, which will be introduced as amendments to the criminal justice and courts bill, will mean the maximum custodial sentence of six months will be increased to 24 months. Grayling told the Mail on Sunday: "These internet trolls are cowards who are poisoning our national life. No one would permit such venom in person, so there should be no place for it on social media. That is why we are determined to quadruple the six-month sentence."
Read more of this story at Slashdot.
- Gigabit Cellular Networks Could Happen, With 24GHz Spectrum
An anonymous reader writes A Notice of Inquiry was issued by the Federal Communications Commission (FCC) on Friday that focuses research on higher frequencies for sending gigabit streams of mobile data. The inquiry specifically states that its purpose is to determine "what frequency bands above 24 GHz would be most suitable for mobile services, and to begin developing a record on mobile service rules and a licensing framework for mobile services in those bands". Cellular networks currently use frequencies between 600 MHz and 3 GHz, with the most desirable frequencies under 1 GHz being owned by AT&T and Verizon Wireless. The FCC feels, however, that new technology indicates the potential for utilizing higher frequency ranges not necessarily as a replacement but as the implementation necessary to finally usher in 5G wireless technology. The FCC anticipates the advent of 5G commercial offerings within six years.
Read more of this story at Slashdot.
- Soda Pop Damages Your Cells' Telomeres
BarbaraHudson writes Those free soft drinks at your last start-up may come with a huge hidden price tag. The Toronto Sun reports that researchers at the University of California, San Francisco found study participants who drank pop daily had shorter telomeres — the protective units of DNA that cap the ends of chromosomes in cells — in white blood cells. Short telomeres have been associated with chronic aging diseases such as heart disease, diabetes and some forms of cancer. The researchers calculated daily consumption of a 20-ounce pop is associated with 4.6 years of additional biological aging. The effect on telomere length is comparable to that of smoking, they said. "This finding held regardless of age, race, income and education level," researcher Elissa Epel said in a press release.
Read more of this story at Slashdot.
- NASA Cancels "Sunjammer" Solar Sail Demonstration Mission
An anonymous reader writes "Space News reports that NASA has cancelled its solar sail demonstration mission (also known as Sunjammer) citing "a lack of confidence in its contractor's ability to deliver." "[Company president Nathan] Barnes said that in 2011 he reached out to several NASA centers and companies that he believed could build the spacecraft and leave L'Garde free to focus on the solar sail. None of those he approached — he only identified NASA's Jet Propulsion Laboratory in Pasadena, California — took him up on the offer. Rather than give up on the opportunity to land a NASA contract, L'Garde decided to bring the spacecraft development in house. It did not work out, and as of Oct. 17, the company had taken delivery of about $2 million worth of spacecraft hardware including a hydrazine tank from ATK Space Systems of Commerce, California, and four mono-propellant thrusters from Aerojet Rocketdyne of Sacramento, California."
Read more of this story at Slashdot.
- Brain Patterns Give Clues To Why Some People Just Keep Gambling
Research from several UK universities, as reported by Time, indicates that the brain activity of compulsive gamblers shows a marked difference in response to pleasure-triggering behavior, which may help explain why they have trouble stopping: [The participants] took an amphetamine capsule, which unleashes endorphins with similar effects to the rush you get from exercise or alcohol, the study says. An additional PET scan revealed that pathological gamblers responded differently to the drug. They released fewer endorphins than those who didn't gamble, and they also reported lower levels of euphoria on a questionnaire afterward. This might help explain the addictive part of pathological gambling: to get pleasure from the act, problem gamblers might need more of it or to work harder for it.
Read more of this story at Slashdot.
- Watch Comet Siding Spring's Mars Fly-By, Live
From the L.A. Times, and with enough time to tune in, comes this tip: Comet Siding Spring's closest approach to the red planet will occur at 11:27 a.m. [Pacific Time] on Sunday. At its closest approach, the comet will come within 87,000 miles of Mars. That's 10 times closer than any comet on record has ever come to Earth. Sadly, this historic flyby is not visible to the naked eye. People who live in the Southern Hemisphere have a shot at seeing the comet if they have access to a good telescope six inches or wider. However, most of us in the Northern Hemisphere will not be able to see the comet at all, experts say, no matter how big a telescope we've got. Here to save the cometary day is astronomy website Slooh.com. Beginning at 11:15 a.m. PDT on Sunday, it will host a live broadcast of the comet's closest approach to Mars, as seen by the website's telescopes in South Africa and in the Canary Islands. Later in the day, beginning at 5:30 p.m. PDT, Slooh will broadcast another view of the comet from a telescope in Chile.
Read more of this story at Slashdot.
- Ask Slashdot: Good Hosting Service For a Parody Site?
An anonymous reader writes "Ok, bear with me now. I know this is not PC Mag's 2014 review of hosting services. I am thinking of getting a parody website up. I am mildly concerned about the potential reaction of the parodee, who has been known to be a little heavy handed when it comes to things like that. In short, I want to make sure that the hosting company won't flake out just because of potential complaints. I checked some companies, and their TOS and AUPs all seem to have weird-ass restrictions (Arvixe, for example, has a list of unacceptable material that happens to include RPGs and MUDs). I live in the U.S.; the parodee is in Poland. What would you recommend?"
Read more of this story at Slashdot.
- No More Lee-Enfield: Canada's Rangers To Get a Tech Upgrade
ControlsGeek writes The Lee-Enfield .303 rifle is being phased out for use by the Canadian Rangers, a Northern aboriginal branch of the Armed Forces. The rifle has been in service with the Canadian military for 100 years and is still being used by the Rangers for its unfailing reliability in Arctic conditions. If only the hardware that we use in computers could have such a track record. The wheels turn slowly, though, and it's not clear what kind of gun will replace the Enfields.
Read more of this story at Slashdot.
- Apple Doesn't Design For Yesterday
HughPickens.com writes Erik Karjaluoto writes that he recently installed OS X Yosemite and his initial reaction was "This got hit by the ugly stick." But Karjaluoto says that Apple's decision to make a wholesale shift from Lucida to Helvetica defies his expectations, and he wonders why Apple would make a change that impedes legibility, requires more screen space, and makes the GUI appear fuzzy? The Answer: Tomorrow. Microsoft's approach with Windows, and backward compatibility in general, is commendable. "Users can install new versions of this OS on old machines, sometimes built on a mishmash of components, and still have it work well. This is a remarkable feat of engineering. It also comes with limitations — as it forces Microsoft to operate in the past." But Apple doesn't share this focus on interoperability or legacy. "They restrict hardware options, so they can build around a smaller number of specs. Old hardware is often left behind (turn on a first-generation iPad, and witness the sluggishness). Meanwhile, dying conventions are proactively euthanized," says Karjaluoto. "When Macs no longer shipped with floppy drives, many felt baffled. This same experience occurred when a disk (CD/DVD) reader no longer came standard." In spite of the grumblings of many, Karjaluoto doesn't recall many such changes that we didn't later look upon as the right choice.
Read more of this story at Slashdot.
- Be True To Your CS School: LinkedIn Ranks US Schools For Job-Seeking Programmers
theodp writes "The Motley Fool reports that the Data Scientists at LinkedIn have been playing with their Big Data, ranking schools based on how successful recent grads have been at landing desirable software development jobs. Here's their Top 25: CMU, Caltech, Cornell, MIT, Princeton, Berkeley, Univ. of Washington, Duke, Michigan, Stanford, UCLA, Illinois, UT Austin, Brown, UCSD, Harvard, Rice, Penn, Univ. of Arizona, Harvey Mudd, UT Dallas, San Jose State, USC, Washington University, RIT. There's also a shorter list for the best schools for software developers at startups, which draws a dozen schools from the previously mentioned schools, and adds Columbia, Univ. of Virginia, and Univ. of Maryland College Park." If you're in a position to actually hire new graduates, how much do you care about applicants' alma maters?
Read more of this story at Slashdot.
- BBC Takes a Stand For the Public's Right To Remember Redacted Links
Martin Spamer writes with word that the BBC is to publish a continually updated list of its articles removed from Google under the controversial 'right to be forgotten' notices. The BBC will begin - in the "next few weeks" - publishing the list of removed URLs it has been notified about by Google. [Editorial policy head David] Jordan said the BBC had so far been notified of 46 links to articles that had been removed. They included a link to a blog post by Economics Editor Robert Peston. The request was believed to have been made by a person who had left a comment underneath the article. An EU spokesman later said the removal was "not a good judgement" by Google.
Read more of this story at Slashdot.
- Canada Will Ship 800 Doses of Experimental Ebola Drug to WHO
The WSJ reports that 800 doses of an experimental vaccine for Ebola, developed over a decade at the Public Health Agency of Canada’s main laboratory in Winnipeg, will be shipped to the World Health Organization in an effort to help fight the ongoing Ebola crisis in West Africa: The vaccine will be shipped by air from Winnipeg, Manitoba, to the University Hospital of Geneva via specialized courier. The vials will be sent in three separate shipments as a precautionary measure, due to the challenges in moving a vaccine that must be kept at a very low temperature at all times. ... The vaccine had shown “very promising results in animal research” and earlier this week, Ottawa announced the start of clinical trials on humans at the Walter Reed Army Institute of Research in the U.S. ... The government has licensed NewLink Genetics Corp., of the U.S., through its wholly owned subsidiary BioProtection Systems Corp. to further develop the vaccine for use in humans. The government owns the intellectual property rights associated with the vaccine.
Read more of this story at Slashdot.
- Despite Patent Settlement, Apple Pulls Bose Merchandise From Its Stores
Apple has long sold Bose headphones and speakers in its retail stores, including in the time since it acquired Bose-competitor Beats Audio, and despite the lawsuit filed by Bose against Apple alleging patent violations on the part of Beats. That's come to an end this week, though: Apple's dropped Bose merchandise both in its retail locations and online, despite recent news that the two companies have settled the patent suit.
Read more of this story at Slashdot.
- iFixit Tears Apart Apple's Shiny New Retina iMac
iFixit gives the new Retina iMac a score of 5 (out of 10) for repairability, and says that the new all-in-one is very little changed internally from the (non-Retina) system it succeeds. A few discoveries along the way: The new model "retains the familiar, easily accessible RAM upgrade slot from iMacs of yore"; the display panel (the one in the machine disassembled by iFixit, at least) was manufactured by LG Display; except for that new display, "the hardware inside the iMac Intel 27" Retina 5K Display looks much the same as last year's 27" iMac." In typical iFixit style, the teardown is documented with high-resolution pictures and more technical details.
Read more of this story at Slashdot.
- Robot SmackDowns Wants To Bring Robot Death Matches To an Arena Near You
Business Insider profiles Andrew Stroup, Gui Cavalcanti and Matt Oehrlein, who are trying to get a robot competition league, called Robot SmackDowns, off the ground. The idea, as you might guess from the name, is to showcase violence and drama to draw on the crowd-appeal of wrestling, NASCAR, and monster truck rallies: this is definitely not Dean Kamen's FIRST — it's giant mechanical beasts shooting at and otherwise trying to destroy each other. And it's not quite right to call them robots in the usual sense; they're more like mecha: "In a MegaBots battle, a two-member team sits inside the bot's upper torso, where the controls systems are housed. Although the co-founders assure me that the pilot and gunner are well protected inside, the situation presents a heightened suspense. Each 15,000-pound robot is equipped with six-inch cannons inside its arms that fire paint-filled missiles and cannon balls at 120 miles per hour. Good aim can cause enough damage to jam its opponent's weapons system or shoot off a limb." They'll be launching a Kickstarter campaign soon; according to the article, "Assuming it raises enough money to build a fleet, [the company's] plan is to take the bots on the road. They will tour the country, face off in epic battles against other MegaBots, and build a fan base. Stroup says (without giving specifics) networks have reached out and will closely watch how MegaBot, Inc.'s upcoming Kickstarter campaign performs. The possibilities for distribution seem endless, though the team is tight-lipped about the exact direction it's headed."
Read more of this story at Slashdot.
- Snapchat Will Introduce Ads, Attempt To Keep Them Other Than Creepy
As reported by VentureBeat, disappearing-message service Snapchat is introducing ads. Considering how most people feel about ads, they're trying to ease them in gently: "Ads can be ignored: Users will not be required to watch them. If you do view an ad, or if you ignore it for 24 hours, it will disappear just like Stories do." Hard to say how much it will mollify the service's users, but the company says "We won’t put advertisements in your personal communication – things like Snaps or Chats. That would be totally rude. We want to see if we can deliver an experience that’s fun and informative, the way ads used to be, before they got creepy and targeted."
Read more of this story at Slashdot.
- Florida Supreme Court: Police Can't Grab Cell Tower Data Without a Warrant
SternisheFan writes with an excerpt from Wired with some (state-specific, but encouraging) news about how much latitude police are given to track you based on signals like wireless transmissions. The Florida Supreme Court ruled Thursday that obtaining cell phone location data to track a person's location or movement in real time constitutes a Fourth Amendment search and therefore requires a court-ordered warrant. The case specifically involves cell tower data for a convicted drug dealer that police obtained from a telecom without a warrant. But the way the ruling is written (.pdf), it would also cover the use of so-called "stingrays" — sophisticated technology law enforcement agencies use to locate and track people in the field without assistance from telecoms. Agencies around the country, including in Florida, have been using the technology to track suspects — sometimes without obtaining a court order, other times deliberately deceiving judges and defendants about their use of the devices to track suspects, telling judges the information came from "confidential" sources rather than disclose their use of stingrays. The new ruling would require them to obtain a warrant or stop using the devices. The American Civil Liberties Union calls the Florida ruling "a resounding defense" of the public's right to privacy.
Read more of this story at Slashdot.
- Apple's Next Hit Could Be a Microsoft Surface Pro Clone
theodp writes "Good artists copy, great artists steal," Steve Jobs used to say. Having launched a perfectly-timed attack against Samsung and phablets with its iPhone 6 and iPhone 6 Plus, Leonid Bershidsky suggests that the next big thing from Apple will be a tablet-laptop a la Microsoft's Surface Pro 3. "Before yesterday's Apple [iPad] event," writes Bershidsky, "rumors were strong of an upcoming giant iPad, to be called iPad Pro or iPad Plus. There were even leaked pictures of a device with a 12.9-inch screen, bigger than the Surface Pro's 12-inch one. It didn't come this time, but it will. I've been expecting a touch-screen Apple laptop for a few years now, and keep being wrong."
Read more of this story at Slashdot.
- Ask Slashdot: Stop PulseAudio From Changing Sound Settings?
New submitter cgdae writes Does anyone know how to stop PulseAudio/Pavucontrol from changing sound settings whenever there is a hardware change, such as headphones being plugged in/out or docking/undocking my laptop? I recently had to install PulseAudio on my Debian system because the Linux version of Skype started to require it. Ever since, whenever I dock/undock or use/stop using headphones, all sound disappears, and I have to go to Pavucontrol and make random changes to its 'Output Devices' or 'Speakers' or 'Headphones' tab, or mute/unmute things, or drag a volume slider which has inexplicably moved to nearly zero, until sound magically comes back again. I've tried creating empty PulseAudio config files in my home directory, and/or disabling the loading of various PulseAudio modules in /etc/pulse/*.conf, but I cannot stop PulseAudio from messing things up whenever there's a hardware change. It's really frustrating that something like PulseAudio doesn't have an easy-to-find way of preventing it from trying (and failing) to be clever. [In case it's relevant, my system is a Lenovo X220 laptop, with Debian jessie, kernel 3.14-2-amd64. I run fvwm with an ancient config.]
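One avenue worth checking (a hedged sketch, not tested on this exact setup, and module availability varies by PulseAudio version): the automatic switching on jack and dock events is handled by optional modules loaded from /etc/pulse/default.pa, and commenting those lines out stops PulseAudio from reacting to hardware changes:

```
# /etc/pulse/default.pa — comment out the modules that react to
# hardware events, then restart the daemon with: pulseaudio -k

# Switches the active port when headphones are plugged in or out:
#load-module module-switch-on-port-available

# If present, switches output to newly connected devices:
#load-module module-switch-on-connect
```

After editing, `pulseaudio -k` kills the daemon (it normally respawns on its own) so the change takes effect.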
Read more of this story at Slashdot.
- Researchers Scrambling To Build Ebola-Fighting Robots
Lucas123 (935744) writes U.S. robotics researchers from around the country are collaborating on a project to build autonomous vehicles that could deliver food and medicine, and telepresence robots that could safely decontaminate equipment and help bury the victims of Ebola. Organizers of Safety Robotics for Ebola Workers are planning a workshop on Nov. 7 that will be co-hosted by the White House Office of Science and Technology Policy, Texas A&M, Worcester Polytechnic Institute and the University of California, Berkeley. "We are trying to identify the technologies that can help human workers minimize their contact with Ebola. Whatever technology we deploy, there will be a human in the loop. We are not trying to replace human caregivers. We are trying to minimize contact," said Taskin Padir, an assistant professor of robotics engineering at Worcester Polytechnic Institute.
Read more of this story at Slashdot.
- Direct3D 9.0 Support On Track For Linux's Gallium3D Drivers
An anonymous reader writes Twelve years after Microsoft debuted DirectX 9.0, open-source developers are getting ready to possibly land Direct3D 9.0 support within the open-source Linux Mesa/Gallium3D code-base. The "Gallium3D Nine" state tracker allows accelerating D3D9 natively by Gallium3D drivers, and there are patches for Wine so that Windows games can utilize this state tracker without having to go through Wine's costly D3D-to-OGL translator. The Gallium3D D3D9 code has been in development since last year and is now reaching a point where it's under review for mainline Mesa. The uses for this Direct3D 9 state tracker will likely be very limited outside of using it for Wine gaming.
Read more of this story at Slashdot.
- India Successfully Launches Region-Specific Navigation Satellite
vasanth writes India has successfully launched IRNSS-1C, the third satellite in the Indian Regional Navigation Satellite System (IRNSS), early on October 16. This is the 27th consecutively successful mission of the PSLV (Polar Satellite Launch Vehicle). The entire constellation of seven satellites is planned to be completed by 2015. The satellite is designed to provide accurate position information service to users in the country as well as in the region extending up to 1,500 km from its boundary, which is its primary service area. In the Kargil war in 1999, the Indian military sought GPS data for the region from the U.S. The space-based navigation system maintained by the U.S. government would have provided vital information, but the U.S. denied it to India. A need for an indigenous satellite navigation system was felt earlier, but the Kargil experience made India realise the inevitability of building its own navigation system. "Geopolitical needs teach you that some countries can deny you the service in times of conflict. It's also a way of arm twisting and a country should protect itself against that," said S Ramakrishnan, director of Vikram Sarabhai Space Centre, Thiruvananthapuram.
Read more of this story at Slashdot.
- Personalizing Git with Aliases
Part of getting comfortable with the command line is making it your own. Small customizations, shortcuts, and time saving techniques become second nature once you spend enough time fiddling around in your terminal. Since Git is my Version Control System of choice (due partially to its incredible popularity via GitHub), I like to spend lots of time optimizing my experience there.
Once you’ve become comfortable enough with Git to commit, and you feel like you’d like to pursue more, you can customize it to make it your own. A great way to start doing this is with aliases. Aliases can help you by providing shorthand commands so you can move faster and have to remember less of Git’s sometimes very murky UI. Luckily, Git makes itself easy to customize by setting global options in a file named `.gitconfig` in our home directory.
Quick note: for me, the home directory is `/Users/jlembeck`. You can get there on OS X or most any other Unix platform by typing `cd ~` and hitting return. On Windows, if you’re using PowerShell, you can get there with the same command, and if you’re not using PowerShell, `cd %userprofile%` should do the trick.
Now, let’s take a look. First, open your `.gitconfig` file (from your home directory):

```shell
~/code/grunticon master* $ cd ~
~ $ open .gitconfig
```
You might see a file that looks similar to this:
```
[user]
  name = Jeff Lembeck
  email = firstname.lastname@example.org
[alias]
  st = status
  ci = commit
  di = diff
  co = checkout
  amend = commit --amend
  b = branch
```
Let’s look at the different lines and what they mean.
```
[user]
  name = Jeff Lembeck
  email = email@example.com
```
First up, the global user configuration. This is what Git references to say who you are when you make commits.
```
[alias]
  st = status
  ci = commit
  di = diff
  co = checkout
  amend = commit --amend
  b = branch
```
Following the user information is what we’re here for, aliases.
Any command given in that screen is prefaced with `git`. For example, `git st` is an alias for `git status`. This allows you to save a little time while you’re typing out commands. Soon, the muscle memory kicks in and `git ci -m "Update version to 1.0.2"` becomes your keystroke-saving go-to.
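You don’t have to edit the file by hand to add these; `git config --global` writes the same `[alias]` entries for you. A quick sketch, using the alias names from the file above:

```shell
# Each line writes an [alias] entry into ~/.gitconfig:
git config --global alias.st status
git config --global alias.ci commit
git config --global alias.co checkout

# Read an alias back to confirm what it expands to:
git config --global alias.st    # prints "status"
```

Either way ends up in the same place: open `.gitconfig` afterward and the new entries will be sitting in the `[alias]` section.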
Ok, so aliases can be used to shorten commands you normally type and that’s nice, but a lot of people don’t really care about saving 10 keystrokes here and there. For them, I submit the use case of aliases for those ridiculous functions that you can never remember how to do. As an example, let’s make one for learning about a file that was deleted. I use this all of the time.
Now, to check the information on a deleted file, you can use `git log --diff-filter=D -- path/to/file`. Using this information I can create an alias:

```
d = log --diff-filter=D -- $1
```
Let’s break that down piece by piece.
This should look pretty familiar. It is almost the exact command from above, with a few changes. The first change you’ll notice is that it is missing `git`. Since we are in the context of git, it is assumed in the alias. Next, you’ll see a `$1`; this allows you to pass an argument into the alias command, and it will be referenced there.
Now, with an example: `git d lib/fileIDeleted.js`. `d` is not a normal command in git, so git checks your config file for an alias. It finds one. It calls `git log --diff-filter=D -- $1` and passes the argument `lib/fileIDeleted.js` into it. That will be the equivalent of calling `git log --diff-filter=D -- lib/fileIDeleted.js`.
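To see the whole thing end to end, here’s a sketch in a throwaway repository (the file name and commit messages are made up for the demo). One detail worth knowing: git appends any arguments you pass to a plain (non-shell) alias, so defining it with just a trailing `--` works as well:

```shell
# Build a scratch repo containing one deleted file:
git init -q demo && cd demo
git config user.name "Demo" && git config user.email "demo@example.com"
echo hello > doomed.txt
git add doomed.txt && git commit -q -m "add doomed.txt"
git rm -q doomed.txt && git commit -q -m "remove doomed.txt"

# Define the alias (locally here, for the demo) and use it:
git config alias.d 'log --diff-filter=D --'
git d doomed.txt    # shows the "remove doomed.txt" commit
```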
Now you never have to remember how to do that again. Time to celebrate the time you saved that would normally be spent on Google trying to figure out how to even search for this. I suggest ice cream.
For further digging into this stuff: I got most of my ideas from Gary Bernhardt’s wonderful dotfiles repository. I strongly recommend checking out dotfiles repos to see what wild stuff you can do out there with your command line. Gary’s is an excellent resource and Mathias’s might be the most famous. To learn more about Git aliases from the source, check them out in the Git documentation.
- Nishant Kothary on the Human Web: The Politics of Feedback
“Were you going for ‘not classy’? Because if you were, that’s cool. This isn’t classy like some of your other work,” said my wife, glancing at a long day’s work on my screen.
“Yep. That’s what I was going for!” I responded with forced cheer. I knew she was right, though, and that I’d be back to the drawing board the next morning.
This is a fairly typical exchange between us. We quit our jobs last year to bootstrap an app (for lack of a better word) that we’re designing and building ourselves. I’m the front-end guy, she’s the back-end girl. And currently, she’s the only user who gives me design feedback. Not because it’s hard to find people to give you feedback these days; we all know that’s hardly the case. She’s the only one providing feedback because I think that’s actually the right approach here.
I realize this flies in the face of conventional wisdom today, though. From VCs and startup founders emphatically endorsing the idea that a successful entrepreneur is characterized by her willingness—scratch that: her obsession with seeking out feedback from anyone willing to give it, to a corporate culture around “constructive” feedback so pervasive that the seven-perpendicular-lines-drawing Expert can have us laughing and crying with recognition, we’ve come to begrudgingly accept that when it comes to feedback—the more, the merrier.
This conventional wisdom flies in the face of some opposing conventional wisdom, though, that’s best captured by the adage, “Too many cooks spoil the broth.” Or if you’d prefer a far more contemporary reference, look no further than Steve Jobs when he talked to Business Week about the iMac back in ’98: “For something this complicated, it’s really hard to design products by focus groups. A lot of times, people don’t know what they (customers) want until you show it to them.”
So which is it? Should we run out and get as much feedback as possible? Or should we create in a vacuum? As with most matters of conventional wisdom, the answer is: Yes.
In theory, neither camp is wrong. The ability to place your ego aside and calmly listen to someone tell you why the color scheme of your design or the architecture of your app is wrong is not just admirable and imitable, but extremely logical. Quite often, it’s exactly these interactions that help preempt disasters. On the flip side, there is too much self-evident wisdom in the notion that, borrowing words from Michael Harris, “Our ideas wilt when exposed to scrutiny too early.” Indeed, some of the most significant breakthroughs in the world can be traced back to the stubbornness of an individual who saw her vision through in solitude, and usually in opposition to contemporary opinion.
In practice, however, we can trace most of our failures to a blind affiliation to one of the two camps. In the real world, the more-the-merrier camp typically leaves us stumbling through a self-inflicted field of feedback landmines until we step on one that takes with it our sense of direction and, often more dramatically, our faith in humanity. The camp of shunners, on the other hand, leads us to fortify our worst decisions with flimsy rationales that inevitably cave in on us like a wall of desolate Zunes.
Over the years I’ve learned that we’re exceptionally poor at determining whether the task at hand calls for truly seeking feedback about our vision, or simply calls for managing the, pardon my French, politics of feedback: ensuring that stakeholders feel involved and represented fairly in the process. Ninety-nine out of a hundred times, it is the latter, but we approach it as the former. And, quite expectedly, ninety-nine out of a hundred times the consequences are catastrophic.
At the root of this miscalculation is our repugnance at the idea of politics. Our perception of politics in the office—that thing our oh-so-despicable middle managers mask using words like “trade-off,” “diplomacy,” “partnership,” “process,” “metrics,” “review” and our favorite, “collaboration”—tracks pretty closely to our perception of governmental politics: it’s a charade that people with no real skills use to oppress us. What we conveniently forget is that politics probably leads to the inclusion of our own voice in the first place.
We deceive ourselves into believing that our voice is the most important one. That the world would be better served if the voices of those incompetent, non-technical stakeholders were muted or at the very least, ignored. And while this is a perfectly fine conclusion in some cases, it’s far from true for a majority of them. But this fact usually escapes most of us, and we frequently find ourselves clumsily waging a tense war on our clients and stakeholders: a war that is for the greater good, and thus, a necessary evil, we argue. And the irony of finding ourselves hastily forgoing a politically-savvy, diplomatic design process in favor of more aggressive (or worse, passive-aggressive) tactics is lost on us thanks to our proficiency with what Ariely dubs the fudge factor in his book The (Honest) Truth About Dishonesty: “How can we secure the benefits of cheating and at the same time still view ourselves as honest, wonderful people? As long as we cheat by only a little bit, we can benefit from cheating and still view ourselves as marvelous human beings. This balancing act is the process of rationalization, and it is the basis of what we’ll call the fudge factor theory.”
Whether we like it or not, we’re all alike: we’re deeply political and our level of self-deception about our own political natures is really the only distinguishing factor between us.
And the worst part is that politics isn’t even a bad thing.
On the contrary, when you embrace it and do it right, politics is a win-win, with you delivering your best work, and your clients, stakeholders, and colleagues feeling a deep sense of accomplishment and satisfaction as well. It’s hard to find examples of these situations, and even harder to drive oneself to search for them over the noise of the two camps, but there are plenty out there if you keep your eyes open. One of my favorites, particularly because the scenarios are in the form of video and have to do with design and development, comes in the form of the hit HGTV show Property Brothers. Starring 6'4" identical twins Drew (the “business guy” realtor) and Jonathan (the “designer/developer” builder), every episode is a goldmine for learning the right way to make clients, stakeholders, and colleagues (first-time home owners) a part of the feedback loop for a project (remodeling a fixer-upper) without compromising on your value system.
Now, on the off-chance you are actually looking for someone to validate your vision—say you’re building a new product for a market that doesn’t exist or is already saturated, or if someone specifically hired you to run with a radical new concept of your own genius (hey, it can happen)—it’ll be a little trickier. You will need feedback, and it’ll have to be from someone who is attuned to the kind of abstract thinking that would let them imagine and navigate the alternate universe that is so vivid in your mind. If you are able to find such a person, paint them the best picture you can with whatever tools are at your disposal, leave your ego at the door, and pay close attention to what they say.
But bear in mind that if they are unable to see your alternate universe, that’s hardly evidence that it’s just a pipe dream with no place in the real world. After all, there was a time when not just the rest of us but even the most abstract thinkers couldn’t imagine an alternate universe with the internet. Or the iPhone. Or Twitter. The list is endless.
For now, I’m exhilarated that there’s at least one person who sees mine. And I’d be a fool to ignore her feedback.
- Routines Aren’t the Enemy
I recently read Greg Smith’s piece on Bocoup’s blog about how they think about time tracking, including all the fascinating data about how your brain works to solve problems. It interested me a lot, since I’ve been thinking about not just how I track projects, but also how I structure my day as a freelancer.
In addition, I read David Brooks’s piece in the New York Times, where he discusses routine and creative people. I am a creature of routine, so reading that so many creative, smart people are, too, gives me a bit of hope. It means that maybe my routines are actually helping me be more productive (or at least that’s what I like to tell myself).
Routine, for me, means making sure I’m taking breaks and putting some structure around my days. Not everyone is the same, but I find that I can go strong and get “in the zone” as it were, for about three to four hours at a go. After that, I need a break. So, now that I’m mostly working remote, and doing that work in my home office, I’ve had to figure out how to make that happen.
This means that I need to work a regular work day, especially since I have a partner doing the same. I get up, eat breakfast, and then hit the office and go hard for an hour or so to organize my day. Once I know what’s ahead of me, I may take a break to shower or run, or I may dive in. But the most important thing for me: I take a lunch. I get away after a morning of work. And if it’s nice, I go outside. Then I hit it again and by 5 or so, I’m done. Maybe my brain isn’t totally spent, but I stop so that I don’t get to the overwhelmed, “what am I doing?!” phase of things.
My favorite part of the day has become when I finish work and move into cooking mode. While cooking, my brain relaxes and I process things. Sometimes solutions pop into my head as well, and I may jot those down to remember for the next morning. As the Bocoup piece discusses, when we allow our brains to process in the background, we’re giving ourselves space to “incubate” what we’ve been working on previously. My routine allows me to make space for incubation.
The other part of the routine that I’m grateful for is that it keeps me working a “normal” number of hours and lets me be more productive and efficient in them. By making parts of my day off limits for work, I know I need to get things done in the time I’ve allotted. That creates space away from the computer, away from my home office, and away from the web. Jeffrey Zeldman discusses this in the lynda.com documentary about his work on the web: with focused time, he gets more done in less time. I find the same to be true, and it’s affirming to hear I’m not alone in that.
As a freelancer working on several different things at once and keeping track of details, the routine comforts me. I realize this isn’t for everyone. Most people would say that since I’m freelance I can work whenever I want, so why not take advantage? I do sometimes; a yoga class sneaks in, or I have a slow day where I can step away and do something else in the afternoon—but for the most part, my friends work 9 to 5 jobs and I want to be able to have fun with them when they’re available. In addition, I find the “work whenever you want” idea actually turns into working more because you work all the time.
This isn’t just about me being able to write the best code either. Having a routine can help with writing, getting ready for a presentation, or whatever your work may be. Even if you are going into an office, trying to block off time for focused work and other times for meetings, or breaks, or email, is just as beneficial as me making sure I’m not spending my whole life working.
Plus, if routines helped some of the great minds produce great works, I guess it’s not so bad to have one when trying to solve code and design problems.
- Training the CMS
Nothing brings content modeling to life like launching a shiny new site: teasers fit neatly without any awkward ellipses, images are cropped perfectly for different screen sizes, related content is wonderfully relevant. The content strategy comes to life, and all is right with the world.
But for years, my joy was short lived—because it would only take a couple weeks for things to begin to fall apart: teasers would stop teasing, an image would get scaled oddly, and—I won’t lie—I’d even start seeing “click here” links.
“Why are you messing this up?” I’d wonder. The content was perfectly modeled. The CMS was carefully built to reflect that model. I even wrote a detailed training document!
In my mind, I saw authors printing out my instructions and lovingly taping them to the side of their screen. In the real world, they skimmed the document once, then never opened it again. When new staff was hired, no one remembered to tell them a content guide even existed.
The problem? I’d spent months neck-deep in the content model, and knew exactly how important those guidelines were. But the authors didn’t. For most of them, it was their first time breaking content into its component parts and building it for reuse. It’s not surprising they were fumbling their way through the CMS: misusing fields, putting formatting where they shouldn’t, and uploading images that clashed with the design.
Maybe you’re like me: you know what needs to happen in the CMS to create the experience everyone’s bought into on the front end, but you’ve found there’s a big difference between having a plan and actually getting people to execute it in their daily work. The results are frustrating and demoralizing—both for you and for the authors you’re trying to help.
Don’t despair. There’s a better way to get your content guidelines adopted in the real world: put them right where they’re needed, in the CMS itself.
Getting the team together
If you’ve made a content template or page table before, the idea of an instructional content strategy document will sound familiar. Content templates act as a guide to a content model, explaining the purpose of each field and section, including information like intended audience, style reminders, and example copy. The problem is that these guidelines typically live independently of the CMS; the closest they ever come to integration is including a few screenshots of the editing interface.
Content guides are generally owned and created by whoever is in charge of content in a team. But actually gathering the guidelines is a collaborative effort: a designer contributes information about ideal photo caption length, art direction, and image sizing. A developer knows all the different places a particular field will be displayed, which file formats are accepted for upload, and how many blog posts can be promoted to the front page at once. A project owner or manager knows whom the author should contact with attribution questions, which audience a product description should target, and which voice and tone documents are relevant for each content type.
Getting content guidelines into the CMS itself requires connecting all these disciplines, which means it doesn’t fit neatly into most teams’ processes. In this article, I’ll show you how to bring all the pieces together to create guidelines that provide help when an author needs it, and make it easier for them to do their job well. We’ll do this by following three principles.
Good labels and guidelines:
- provide context, explaining what a field is for and how it will be used;
- are specific, encouraging accuracy and uniformity while eliminating guesswork; and
- are positive and helpful, rather than hostile and prohibitive.
Let’s walk through how we can apply these principles to each piece of the CMS, using specific examples and addressing common challenges.
Introductory text
Before throwing an author into the endless fields of an edit form, we want to give them an introduction to the overall content type: what it is, where and how it will be displayed, and who it’s for.
Let’s say authors used to create new pages for each event, and then remove them (when they remembered!) after the event ended. We’re replacing that with a dedicated Event content type. To help authors transition, I might include introductory text explaining what the Event content type is for, and reminding them that standalone event pages no longer need to be created or manually removed after each event.
Where and how this works varies by CMS. For example, Drupal has an “Explanation or submission guidelines” field for each content type that displays at the top of every entry’s edit page. WordPress allows you to add meta boxes to edit screens with custom code or plugins like Advanced Custom Fields, which makes the information more accessible than hiding it in the contextual help tab. If you’re not sure how to do this in your CMS, talk to your developers—chances are, they can make it possible once they understand the goal.
When naming fields:
- Be specific and descriptive. For example, in an artist profile, you might replace the default of “Title” and “Body” with “Artist Name” and “Biography.” Even when they feel redundant, field names like “Event Name” and “Event Description” help orient the author and remind them which content belongs where.
- Describe the content in the field, not the format of the field. An image field named “Image” doesn’t tell an author what kind of image. Something like “Featured Photo” is better, and best is a specific description like “Venue or Speaker Photo.”
- Be consistent. For example, don’t phrase a label as a question (“Open to the public?”) unless you consistently use questions across all your fields and content types.
Help text and instructions
Where a field name describes what content is, help text describes what it does. The goal is to help authors meet the site’s strategic, format, and style needs, and answer questions like the ones in these four categories:
Messaging and information
- What’s the underlying message of this copy?
- What does this content do, in the context of the site? Is the point of this field to inform a user, drive them to action, or provide metadata for the site structure?
- Are there things this field must include, or shouldn’t include?
- Should the alt text describe, caption, or explain the function of this image?
- Who’s the audience? Are they new to our work or familiar with our internal jargon?
Style, voice, and tone
- What grammatical structure should this text take (e.g., full sentence, sentence fragment, Three. Word. Tagline.)?
- Should the title be straightforward, or written as clickbait? (Hint: NO.)
- Should there be ending punctuation?
- Character count can be enforced by the CMS, but is there an ideal length the author should aim for?
- Are there style rules, such as acronym or capitalization usage, that are likely to come into play?
- Are you trying to change authors’ current writing habits? For example, do they need reminders not to write “click here” or reference page location like “the list to the left”?
Format requirements
- Which formats are allowed for an image or file upload?
- Do uploads have a size limitation?
- Should the filename follow a specific pattern (e.g., OrpingtonPoster-August2014.pdf)?
- If a field uses HTML, which tags are accepted?
- For a checkbox or select list, is there an upper or lower limit to the number of selections?
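Several of these format rules can be enforced by the CMS rather than merely documented. As a minimal sketch (the filename pattern and size limit here are hypothetical, not taken from any particular CMS), an upload validator might return author-friendly messages instead of cryptic errors:

```python
import re

# Hypothetical rules: a "Name-MonthYear.pdf" filename pattern and a 5 MB cap.
FILENAME_PATTERN = re.compile(
    r"^[A-Za-z]+-(January|February|March|April|May|June|"
    r"July|August|September|October|November|December)\d{4}\.pdf$"
)
MAX_BYTES = 5 * 1024 * 1024  # 5 MB

def validate_upload(filename, size_bytes):
    """Return a list of plain-language problems with an uploaded file."""
    problems = []
    if not FILENAME_PATTERN.match(filename):
        problems.append("Name the file like OrpingtonPoster-August2014.pdf.")
    if size_bytes > MAX_BYTES:
        problems.append("Files must be 5 MB or smaller.")
    return problems
```

The messages double as help text: an author who trips the check sees the same guidance the edit form already displays.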
Design and display
- Does the value of this field change how or where the content is displayed? For example, does a checkbox control whether an article will be pushed to the homepage?
- Does this field display alongside other fields (and so shouldn’t be duplicative), or appear alone (like teaser text)?
- Will the CMS scale and resize images automatically, or does the author need to upload multiple versions?
- Where will this image be displayed? Will different sizes (like thumbnails) show in different places around the site?
- Are there art-direction requirements for this image? For example, does it need dark negative space on the left for an overlaid headline? Should it show a person looking directly at the camera?
Making every word count
You can’t answer all of those questions at once—no one is going to read three paragraphs of instructions for a single text field. Your goal is to highlight the most valuable—and most often forgotten—information. For example, a company that long ago settled on PNGs for its product images doesn’t need reminders of appropriate file types. You might remind users to write in second person in a “Subtitle” field, then link off to a full voice and tone document for more guidance.
Whatever you do, use space wisely—if the field label is “Featured Photo,” don’t write “This is where you upload the featured photo.”
Beware the big WYSIWYG
Even the most well-meaning authors can be overwhelmed by a big blank box and a million WYSIWYG buttons, and the results aren’t pretty. Editorial guidelines help remind users what these long text fields should and shouldn’t be used for.
If authors will be doing any formatting, it can be helpful to customize the WYSIWYG and provide explicit styling instructions to keep them on track.
Be wary of endless “DO NOT” instructions. Positive reminders and examples of good content can be just as effective as prohibitions—and they feel much friendlier.
Making lists contextual and clear
Select fields and lists of checkboxes are part of many content types, but they’re used for a variety of different functions: a “Category” field might control where an entry is shown on the site, how it relates to other content, or even which layout template will be used for display. Good instructions provide authors with this context.
Please remember to change your lowercase, underscore-ridden, concatenated, and abbreviated machine names, like “slvrLc_wynd,” to real words, like “Silver-Laced Wyandotte.” Key:label pairs exist so that your authors don’t have to speak database to be successful. Use them.
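The idea is CMS-agnostic: the database stores the machine key, and authors only ever see the label. A minimal sketch in Python (the breed names are borrowed from above; this isn’t any specific CMS’s API):

```python
# Key:label pairs: the key is what the database stores,
# the label is what authors actually read in the select list.
BREED_OPTIONS = {
    "slvrLc_wynd": "Silver-Laced Wyandotte",
    "bff_orpngtn": "Buff Orpington",
}

def options_for_select(options):
    """Return (key, label) pairs sorted by the human-readable label."""
    return sorted(options.items(), key=lambda pair: pair[1])
```

Sorting by label, rather than by key, is part of the same principle: the interface should be organized around what authors read, not around what the database stores.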
Ordering your fields
Many CMSes will let you group fields—most commonly in fieldsets or tabs—to help authors make sense of what they’re seeing. In most CMSes, the front-end display order doesn’t need to match the backend form order, so you can organize fields to help the authors do their job without affecting how things look on the live site.
Usually, you’ll want to either group similar content fields together, or arrange fields in the order they’ll be entered.
For example, say that you need multiple versions of a single piece of information, like a short title and a long title. It’s helpful to see these side by side, with reminders about how specifically the versions should differ from one another.
Or, say that your content will be copied from another system, like a manufacturer’s specification or a legacy database. Matching your field order to the content source means that authors won’t have to skip around while creating an entry. Similarly, if your authors always enter the “Event Location” content in between the “Presenter Bio” and “Event Date” fields, the edit form should match that—even if it’s not the order that makes the most sense to you.
The developer in me wants to create a library of reusable generic help snippets, but the best instructions I produce are the ones that are specific to a particular client’s internal organization and processes. Don’t shy away from including information like “Contact Ann Sebright (x8453) for photo attribution information,” or “Check the internal calendar for date conflicts before posting a new event.”
Making it real
Every team’s workflow is different, so I can’t tell you exactly how to integrate the creation of these instructions into your projects. I can give you questions, though, so you can have productive conversations at the right times in your process.
Picking a CMS
If you haven’t selected a CMS yet, consider the following questions when evaluating your options. If the CMS has already been chosen, be aware of the answers so you can adjust your instructions strategy accordingly.
- What formats of field-level help text does the CMS support: single lines of text, paragraphs, pop-ups, hover text?
- Can the instructions include HTML? A bit of simple formatting can go a long way toward readability.
- How hard is it to update the help text? As needs change over time, will adjusting the instructions be a hassle?
- Can you change custom field labels used in the admin interface without affecting the machine name used in queries and front-end display?
Content strategists, developers, designers, and clients or subject-matter experts often work together to build content models. But it’s important to bring regular authors—not just the project leads, but the people who will actually be creating entries on the site—into the conversation as well, as early as possible.
- Review content models and field names with authors before they are finalized. Do the field names you’re using make sense to them? Do they understand the relationships between fields, and what that means for connections between pieces of content?
- Are there places where the new model differs significantly from the authors’ current conception of the content? Larger changes warrant more detailed reminders and help.
- For fields that are subtly different from one another: what kind of information will authors need to distinguish between them and use them correctly?
- If you’ve chosen a CMS with a limited ability to include help text, have you simplified your models accordingly? A model people can’t remember how to follow won’t do much for your content.
Content migration planning
When you have significant legacy content, plan for migration to be its own phase of the project. Talk about what kinds of guidelines would make moving content to the new CMS smoother.
- If blobs in the current site are being split into component chunks, position those field components near each other during migration, since they are all being derived from the same source.
- Create a set of perfect example entries for authors to consult during migration. A set of real content—especially one showing how information from the old site fits into the new model—is a valuable reference tool.
- Consider adding “migration phase” instructions and field groupings, with a separate set of “live site” guidelines to be put in place after migration is complete. The kind of reminders needed while content is being moved are not always the same as the help text for content being newly created.
Design and development
As the design and CMS take shape, designers and developers are in the perfect position to spot potential snags.
- Are any pieces of content making your spidey sense tingle? Is there author-editable imagery that has particular art-direction needs? Are there site functions (e.g., “only one piece of content can be promoted to the front page at a time, and promoting a new piece will un-promote the existing content”) that you feel like you’re the only person who understands? Make note of any pieces of site content that make you nervous, and share them with your team so the guidelines can address those issues.
- Who’s going to enter the help text into the CMS itself? If the instructions are more tactical, this may be something the development team can do as they’re building out the content models. The content strategist may take the lead for more editorial guidelines—in many CMSes, help text is entered through a GUI rather than in code, so its entry doesn’t necessarily need to be owned by a developer.
- Help text deserves its own QA. It’s incredibly important to see the instructions in context—there’s no other way to realize that a particular piece of text is too long or lost in the clutter, or that the field order doesn’t make sense in the form. The development and client or business teams should both review the edit forms for every content type to make sure all the important information has been captured.
Revisit your work regularly with both your team and your client or project sponsor. Adjusting the help text or rearranging the fields won’t take much ongoing time, but can make a huge difference to the quality of the author experience—and the resulting content.
- Review live pages, especially any with complex layouts. If you find images that aren’t following art direction or text that isn’t providing needed information, add more specific help text around those issues.
- Chat with the authors using the system and make adjustments based on their feedback. Is there anything annoying about the edit form? Are the fields in an order that works for them? Are there places where a link over to a style guide or intranet page would save them time? Small changes to the interface can make a big difference to the overall workflow for an author.
Setting authors up for success
I used to think it was inevitable: that a few months after launch, I’d be guaranteed to find misused fields and confusing headlines littering a site—the particular kind of chaos that arises from combining a powerful CMS with untrained site administrators. But as I’ve moved the content guidelines into the CMS itself, my post-launch check-ins have shifted away from annoyed sighs and toward small improvements instead.
When we embed instructions where they’re most relevant and helpful, we help our authors build good habits and confidence. We allow them to maintain and expand a complex site without feeling overwhelmed. A website that looks perfect on launch day is a wonderful thing. But when we improve the author experience, we improve the content forever—and that’s a whole lot more satisfying.
- Collaborative User Testing: Less Bias, Better Research
I’ve always worked in small product teams that relied on guerrilla user testing. We’d aim to recruit the optimal number of participants for these tests. We’d make sure the demographic reflected our target audience. We’d use an informal approach to encourage more natural behavior and reduce the effect of biases participants could be prone to.
But you know what we almost never talked about? Ourselves. After all, we were evaluating work we had personal and emotional involvement in. I sometimes found myself wondering: how objective were our findings, really?
It turns out, they may not have been.
In “Usability Problem Description and the Evaluator Effect in Usability Testing,” Miranda G. Capra identifies a tendency in the UX community to focus on users when talking about testing, while seldom talking about the role of evaluator. The assumption is that if the same users perform the same tasks, the reported problems should be the same—regardless of who evaluates them.
But when Capra studied 44 usability practitioners’ evaluations of pre-recorded sessions, this wasn’t observed. The evaluators, made up of experienced researchers and graduate students, reported problems that overlapped at an unexpectedly low rate—just 22 percent. Different evaluators found different problems, and assigned different levels of severity to them. She concluded that the role of evaluator was more important than previously acknowledged in the design and UX community.
If complete and objective results couldn’t be achieved even by usability professionals who were evaluating the same recordings, what can we expect from unspecialized teams planning, conducting, and evaluating user testing?
Bias is unavoidable
As people fully immersed in the project, we are susceptible to many cognitive biases that can affect outcomes at any stage of research—from planning to analysis. Confirmation bias among inexperienced evaluators is a common one. This leads us to phrase questions in a way that is more likely to confirm our own beliefs, or subconsciously prioritize certain responses and ignore others. I’ve done it myself, and seen it in my colleagues, too. For example, I once had a colleague who was particularly keen on introducing search functionality. Despite the fact that only one respondent commented on the lack of search, they finished the testing process genuinely convinced that “most people” had been looking for search.
We all want our research to provide reliable guidance for our teams. Most of us wouldn’t deliberately distort data. But bias is often introduced unknowingly, without the researcher being aware of it. In the worst-case scenario, distorted or misleading results can misinform the direction of the product and provide the team with false confidence in their decisions.
Capra’s research and other studies have shown that bias commonly occurs at the planning stage (when drafting test tasks and scenarios), during the session itself (when interacting with the participants and observing their behavior), and at the analysis stage (when interpreting data and drawing conclusions). Knowing this, my team at FutureLearn, an online learning platform, set out to reduce the chance of bias in our own research—while still doing the quick, efficient research our team needs to move forward. I’d like to share the process and techniques we’ve established.
Take stock of your beliefs and assumptions
Before you begin, honestly acknowledge your personal beliefs, particularly if you’re testing something you have “strong feelings” about. Then write those beliefs and assumptions down.
Do you think the Save button should be at the top of the form, rather than at the end, where you can’t see it? Have you always found collapsing side menus annoying? Are you particularly pleased and proud of the sleek new control you designed? Are you convinced that this label is confusing and that it will be misinterpreted? By taking note of them, you’ll stay more aware of them. If possible, let someone else lead when these areas are being tested.
Involve multiple reviewers during planning
At FutureLearn, our research is highly collaborative—everyone in the product team (and often other teams) is actively involved. We try to invite different people to each research activity, and include mixed roles and backgrounds: designers, developers, project managers, content producers, support, and marketing.
We start by sharing a two-part testing plan in a Google Doc with everyone who volunteered to take part. It includes:
- Testing goals: Here we write one to three questions we hope the testing will help us answer. Our tests are typically short and focused on specific research objectives. For example, instead of saying, “See how people get on with the new categories filter design,” we aim for objective phrasing that encourages measurable outcomes, like: “Find out how the presence of category filters affects the use of sorting tabs on the course list.” Phrasing the goals in this way keeps our evaluators focused and leaves less room for misinterpretation.
- Test scenarios: Based on the goals, we write three or four tasks and scenarios to go through with participants. We make the tasks actionable and as close as possible to expected real-life behavior, and ensure that instructions are specific. With each scenario, we also provide context to help participants engage with the interface. For example, instead of saying: “Find courses that start in June,” we say something along the lines of: “Imagine you’ll be on holiday next month and would like to see if there are any courses around that time that interest you.”
In one past session, where participants were required to find specific courses, we used the verbs “find” and “search” in the first draft. A colleague noticed that by asking participants to “search for a course,” we could be leading them toward looking for a search field, rather than observing how they would naturally go about finding a course on the platform. It may seem obvious now that “search” was the wrong word choice, but it can be easy for a scenario drafter who is also involved in the project to overlook these subtle differences. To avoid this, we now have several people read the scenarios independently to make sure the language used doesn’t steer responses in a particular direction.
Perform testing with multiple evaluators
In her paper, Capra argues that having multiple observers reduces the chance of biased results, and that “having more evaluators spend fewer hours is more effective than having fewer evaluators spend more hours.”
In my past experience, the same small group of people (or a single person) was always responsible for user testing. Typically, they were also working on the project being tested. This sometimes led evaluators to be defensive—to the point that the observer would try to blame a participant for a design flaw. It also sometimes made the team members who weren’t involved in research skeptical about undesirable or unexpected results.
To avoid this, we have several people oversee all stages of the process, including moderating the sessions. Usually, four of us conduct the actual session—two designers, a developer, and someone from another discipline (e.g., a product manager or copywriter). It is crucial that only one of the designers is directly involved in the project, so the other three evaluators can offer a fresh perspective.
Most importantly, everyone is actively involved, not merely a passive observer. We all talk to participants, take notes, and have a go at leading the session.
During a session, we typically set up two testing “stations” that work independently. This helps us to collect more diverse data, since it allows two pairs of people to interview participants.
The sessions tend to be short and structured around the specific goals identified in the plan. The whole process lasts no more than two hours, during which the two stations combined talk to 10 to 12 participants, for about 10 minutes each.
Bias can take many forms, including the manipulation of participants through unconscious suggestion, or selection of people who are more likely to exhibit the expected behavior. Conducting testing in a public place, like the British Library, where our office is conveniently located, helps us ensure a broad selection of respondents who fit our target demographic: students, professionals, academics, and general-interest learners.
Have multiple people analyze results
Data interpretation is also prone to bias: cherry-picking findings, fixating on some responses, and being blind to others are common mistakes among inexperienced evaluators.
Analyzing the data we gather is also a shared task in our team. At least two of us write up the notes in Google Docs and rewatch the session videos, which we record using Silverback.
Most of our team doesn’t have experience in user testing. Handed a blank sheet of paper and asked to make sense of their findings, they’d find the task intimidating and time-consuming—they wouldn’t know what to look for. Therefore, the designer responsible for the testing typically sets up a basic Google form that asks evaluators a series of fact-based questions. We use the following structure:
- General questions: The participant’s name, age group, level of technical competence, familiarity with our product, and occupation. We ask these questions right at the beginning, along with having people sign a consent form.
- Scenario performance: This section contains specific questions related to participants’ performance in each scenario. We typically use a few brief multiple-choice questions. Since our tests are short, we usually provide two to four options for each answer, rather than complex rating scales. Evaluators can then provide additional information or comments in an open text field.
These simple forms help us reduce the chance of misinterpretation by the evaluator, and make it easier for inexperienced evaluators to share their observations. They also allow us to support our analysis with quantitative data—e.g., how many people experienced a problem and how often? How easy or difficult was a particular task to complete? How often was a particular element used as expected, versus ignored or misinterpreted?
Using these forms, an evaluator can typically review all five of a station’s participants in about an hour. We do this as soon as possible—ideally on the same day as the sessions, while the observations are still fresh in our memories, and before we get a chance to overanalyze them.
Once evaluators submit their forms, Google Docs creates an automatic response summary, which includes raw data with metrics, quotes, performance for each task, and other details.
Based on these responses, recorded videos, and everyone’s written notes, the designer responsible synthesizes the team’s findings. I usually start by grouping all the collected data into related themes in another spreadsheet, which helps me see all the data at a glance and ensure nothing gets lost or ignored.
At this stage we look for general patterns in observed behavior. Inevitably some outliers and contradictions come up. We keep track of those separately. Since we do research regularly, over time these outliers add up, revealing new and interesting patterns, too.
We then write up a summary of results—a short document that outlines these patterns and explains how they address our research goals. It also contains task performance metrics, memorable quotes, interesting details, and other things that stood out to the team.
The summary is shared with the research team to make sure their notes were included and interpreted correctly. The researcher responsible then puts everything together into a user testing report, which is shared with the rest of the company. These reports are typically short PDFs (no longer than 12 pages) with a simple structure:
- Goals of testing and tasks and scenarios: Content from the testing plan.
- Respondents: A brief overview of the respondents’ demographics (based on the General Questions section).
- Results and observations: Based on the results summary recorded earlier.
- Conclusions: Next steps or suggestions for how we’ll use this information.
Some teams avoid investing time in writing reports, but we find them useful. We often refer back to them in later stages and share them with people outside the project so they can learn from our findings, too. We also share the results in a shorter presentation format at sprint reviews.
Keep it simple, but regular
Conducting short, light sessions regularly is better than doing long, detailed testing only once in a blue moon. Keeping it quick and iterative also prevents us from getting attached to one specific idea. Research has suggested (PDF) that the more you invest in a particular route, the less likely you are to consider alternatives—which could also increase your chances of turning user testing into a confirmation of your beliefs.
We also had to learn to make testing efficient, so that it fits into our ongoing process. We now spend no more than two or three days on user testing during a two-week sprint—including writing the plan, preparing a prototype in Axure or Proto.io, testing, analyzing data, and writing the report. Collaborative research helps us keep each individual contributor’s time focused, saving us from spending time filtering information through deliverables and handoffs, and increasing the quality of our learning.
Make time for research
Fitting research into every sprint isn’t easy. Sometimes I wish someone would just hand me the research results so I could focus on designing, rather than data-gathering. But testing your own work regularly can be one of the most effective ways to overcome bias.
Hindsight bias is an interesting example. We become more prone to thinking we “knew it all along” as we grow more experienced, and as our perception of how much we knew in the past inflates. This can lead some designers to believe that experience “reduces the need for usability tests.” The risk, however, is that our design experience can make it harder for us to connect empathetically with our target audience—to relate to the struggles they’re going through as they use our product. (That’s also why it’s so hard to teach a subject you’ve mastered.)
According to researchers like Paul Goodwin, a professor of management science at the University of Bath, the most effective known way we can overcome hindsight bias is by continuous education (PDF)—particularly when we work hard to gain new knowledge.
Actively engaging in user testing is the most effective way of learning I know. It’s also a great way to avoid arrogance and relate to the people we are building for. Minimizing bias takes practice, honesty, and collaboration. But it’s worth it.
- Laura Kalbag on Freelance Design: Breaking Stuff
Do you know that horrible fear when you’ve broken something on a client project and you have no idea how to fix it? I do… Sometimes I’ll have been wading through templates on a site, getting it all up to scratch, then suddenly I’m faced with a page of doom—a whole page of garbled semi-English that sort of resembles an error message, but nothing I’ve ever seen before.
As a freelancer, I’ve always been proud to have the time to dedicate to learning. Keeping up with the industry, and being able to level up my skills on each new project, is very important to me.
But sometimes I struggled when I pushed myself that little bit too far. A few times I’ve had to request a lifeline from kind people on Twitter to pull me out of a hole. And then I feel a bit daft, having to admit my inadequacies on a social network in order to save myself from a worse situation.
Most of us seem to have a boundary somewhere that defines what we think we can’t do. Working for and by yourself, you are limited by your own experience and skills.
Testing the limits
For me, it was the all-powerful and uncommunicative command line. It terrified me. I thought I would probably find a way of deleting everything on my hard drive if I made a typo in the command line.
My fear of the command line was burdensome when it came to using Git. A lot of people I knew used Git on the command line, but I preferred to use a GUI tool. I found it easier to understand the concepts of staging, branches, pushing, and deploying with a visual representation of the actions.
However, when I was using Git with the rest of the ind.ie team, trying to debug issues when I’d committed files to the wrong branch, I was assisted by developers who would fire up Terminal (the command line tool) to look at the problem.
The wonderful thing about working with these developers is that they’d explain what they were doing as they went along. I wasn’t expected to sit quietly to the side until they’d used magic to fix my problem. I would stay in my seat, and they would dictate what I should type. By typing for myself, and understanding what I was typing, I was learning to do it on my own.
Learning from other people in this way is a rich and rewarding experience. I started being able to use Git on the command line with confidence. Having seen Andy, one of the developers I was working with, look up some of the less-obvious Git commands on the web, I suddenly didn’t feel so exceptionally useless. It gave me the confidence to do the same without feeling like I was a failure because I didn’t know all the commands by heart.
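The staging, committing, and branching concepts described above translate directly to a handful of Terminal commands. Here’s a minimal sketch in a throwaway repository—every name (file, branch, author) is invented for illustration, and pushing is omitted since it needs a remote:

```shell
# A throwaway repo to practice the basic Git workflow safely.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name "Example"
git config user.email "example@example.com"

echo "<h1>Hello</h1>" > index.html
git add index.html                  # stage the file
git commit -q -m "Add index page"   # record the snapshot
git checkout -q -b fix-nav          # create and switch to a new branch
git rev-parse --abbrev-ref HEAD     # prints the current branch name
```

Because everything happens in a temporary directory, a typo here can’t touch your real projects—which makes it a good sandbox for getting over command-line fear.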
My safety net
My developer safety net made me more willing to try new things. Now, when I came up against intimidating error messages, I had people who could rescue me in five minutes, rather than having to put out a call of shame on Twitter.
But my confidence wasn’t exclusive to ind.ie. Feeling more secure in my new abilities improved my confidence in my client work. I knew I was now able to do loads of stuff with Git, so why could I not handle some of these other problems?
Stepping over the line
Job titles can be used to put us in our place. I’ve been told I’m just a designer, so I shouldn’t do development. You’ve been told you’re just a developer, so someone else will handle the design. It’s too easy to forget that we’re working on this web platform together, and a crossover of skills is incredibly valuable.
Technical problems shouldn’t just be reserved for those with technical job titles. As designers, it’s our job to be familiar with our platform. Print designers know a lot about paper and inks, and architects know about building materials and regulations. Web designers should understand their medium, even if they sometimes need a hand with the tricky stuff. Other industries have their parallels: prepress technicians and building contractors are available to help designers pick up the technical details.
Exploring the working environment
You’ll get a lot more out of your job if you don’t feel like your job title has put you in a box. It’s fun to learn new things, and explore unknown territories. Get in an environment that pushes you, but gives you a safety net. You owe it to yourself to learn more, be ambitious, be better at what you do, and strive to be the best you can be at your craft.
You wouldn’t like it if someone else said you were “just a designer,” so don’t say it to yourself.
- Before You Hire Designers
Before you hire a designer, set up the situation this person needs to be effective. Bringing any employee into an unprepared environment where they don’t have the tools or authority to succeed is unfair to them and a huge waste of your hard-earned money. It also burdens the other employees who aren’t sure what to do with this new person.
A few years ago, I made plans with a friend for breakfast. She was late. When she finally got there, she apologized, saying she’d been cleaning up for the housecleaner.
“Why in the world would you clean up for a housecleaner?!?” I asked.
“So she can actually clean, you idiot.”
This made no sense to me, but I let it go. Otherwise, we would’ve argued about it for hours. About a year later, I got busy enough with work that my house looked like it could star in an episode of Hoarders, so I hired a cleaner. After a few visits, I found myself cleaning up piles and random junk so that she could get to the stuff I actually wanted her to get to.
I called my friend and said, “I get why you had to clean up for the cleaner now.”
“I told you you were an idiot.”
(My friends are great.)
The moral of this story is you can’t drop a designer into your environment and expect them to succeed. You’ve got to clearly lay out your expectations, but you also have to set the stage so your designers come in and get to the stuff you need them to do.
Introducing a new discipline to your workplace
Let’s assume you don’t have a designer on staff. People have been going about their business and getting their work done, and now you’re introducing a designer. Even if your employees have been begging you to hire a designer, this creates a challenge. People are creatures of habit and comfort. As difficult as they claimed their jobs were without a designer, having one still means giving up control of things. This isn’t easy. All the complaining about having to do someone else’s job is about to turn into complaining about giving their work to someone else. People are awesome.
A designer will absolutely change what your company produces, and they’ll also affect how your company operates. You’ll need to adjust your workflows for this new person, and be open to having them adjust your workflow once they arrive.
Before you throw someone into the mix, sit the company down and explain why you’re hiring a designer, how the company benefits, and what the designer’s role and responsibilities are. Explain how adding this skill set to your group makes everyone’s job easier. (Including possibly going home earlier!) Thank them for going without a designer for so long. Talk to them about tasks they’ll no longer need to take on because of the new designer. Tell them to expect some bumps as the designer gets integrated into the fold.
Then back your designer up when those bumps occur.
Your designer can’t do shit without support from the person up top. If their job is to go in and change the way people work, the way the product behaves, and the way people interact with each other (all of which design will do), that’s gonna ruffle a few feathers. When a colleague runs into your office and says, “The designer is changing things!” a well-placed “That’s exactly what I’m paying the designer to do” sets the perfect tone. Remember, designers aren’t out there doing it for their own well-being. They’re your representative.
As tough as introducing a designer may be, it’s infinitely easier than introducing a designer into a workplace where a bad designer has been nesting. We’re talking industrial-sized smudge sticks. I once took a job where coworkers would walk to my desk and ask me to whip up signs for their yard sales. When I informed them that wasn’t my job, they replied that the previous designer always did that stuff. I reminded them that the previous designer got fired for not meeting his deadlines. Eventually, they stopped asking. Had I been more willing to bend to their requests, we would’ve forever established that designers are the people who make yard sale signs for coworkers.
Clear the table of any shenanigans like that before your new designer starts. Delivering this message is much easier coming from you. Don’t pass it off to the new person.
Understanding what designers are responsible for
This may sound obvious: a designer is responsible for design, right? By design, I’m talking about not just how something looks, but how it embodies the solution to the problem it’s meant to solve. Remember that nice young designer who worked at a big company—the one who wasn’t invited to strategy meetings? By the time work got to him, the decisions were set down to the smallest details and all he did was execute. He wasn’t designing. He was executing on someone else’s design.
In truth, he needed to assert himself. But this chapter is about you. Design is the solution to a problem, something you pay a professional to handle. A designer is, by definition, uniquely qualified to solve those problems; they’re trained to come up with solutions you may not even see. Your designer should champ at the bit to be involved in strategic discussions.
Make sure to use your designer’s skill set completely. Make sure they’re involved in strategy discussions. Make sure they’re involved in solving the problem and not executing a solution that’s handed to them. Most of all, make sure they see this as part of their job. If they don’t, your design will only ever be as good as what people who aren’t designers think up.
Giving designers the authority and space they need
Just as it’s absolutely clear what authority your office manager, accountant, and engineers carry, make sure your company understands what authority your designer has. Let’s go ahead and extend the definition of authority to “things they own,” in the same way the bookkeeper owns the books and the engineer owns the code. (Yes, I get that technically you own it all. Work with me here.)
Trust your designers. Give them the authority to make decisions they’re singularly qualified to make. Before you bring a designer into the company, decide what authority they have over parts of your workflow or product. Do they have the last call on user-interface decisions? Do they need to get input from other stakeholders? (Always a good idea.) Do they need approval from every stakeholder? (Always a political shit show. Trust me.)
The right answer depends on the type of organization you run and the skill level of the designer. But whatever that call is, empower your designer with the maximum amount of agency to do their job well. No one tells the accountant how to do their job, but I’ve been in a hundred workplaces where people told the designer how to do theirs.
A designer with backbone and experience won’t have any problem carving out the room they need to work, but they can’t do so if you don’t grant them the authority. Otherwise, you run the risk of bringing someone in to follow the whims of those around them. That’s not a full member of the team. That’s a glorified Xerox machine, an asset used by the rest of the company whenever they need some pixels pushed around.
That’s how someone who’s supposed to work on your website’s UI ends up making Lost Cat flyers for Betty in HR.
Equipping designers with the tools they need
This should go without saying, except I once spent the first two weeks at a job spinning through a draconian requisition process to get copies of Photoshop and BBEdit, which the company considered nonessential software. Someone from IT gave me a one-hour demo on how I could harness PowerPoint to do anything I needed Photoshop for. (I know I should’ve stopped him, but at some point my annoyance faded in favor of fascination at how much he’d thought this out.)
Like any craftsperson, your designer is only as good as their tools. Make sure they have what they need. Yes, it’s fair to ask them to justify their use. No, you don’t need to understand what everything does. Trust that they do.
How well you prepare your team for a designer, how well your designer gets along with everyone, and how professionally they behave means exactly jack squat if your designer doesn’t succeed in their goals. Before bringing any employee on board, you should know how you’ll measure their success. Will it be hard metrics? Do you expect sales or conversions on the website to increase a certain number? Is the goal to deliver a big upcoming project on time and under budget?
Your business needs vary, so I can’t give you a magical equation for design success. But I can say: whatever your success metric is, make sure your designer both knows about it and has the authority to accomplish it.
I do have a story for you though. I took a contract-to-hire job once, and the creative director sat me down on my first day and told me that he wasn’t sure what to expect of me and how I’d fit in with the rest of the studio. (Someone didn’t get their house in order.) At the end of the contract period, he’d evaluate whether to keep me around. I was young and stupid, so I didn’t press much and decided to blend in as much as possible (rookie mistake). When my contract was up, the creative director called me into his office and said I hadn’t performed the way they’d expected. Which was odd, because neither of us really understood what had been expected. I felt shitty, wondering what I could’ve done better. And honestly, I’m sure the creative director felt shitty too, because he realized he hadn’t properly set expectations for success.
So yeah. Don’t do that. It should never be a surprise to anyone working for you that they’re doing badly. Or doing well for that matter. Let them know what they need to do to succeed. Let them know they’re succeeding. If they’re not succeeding, help them adjust course. And finally, let them know once they’ve succeeded.
Writing the job description
The most important thing about readying for a designer is figuring out how your company or organization benefits from their involvement. What will you be able to do once they’re here? Picture yourselves a year in the future. What do you hope to accomplish? Write those things down. They’re the basis for the job description you’re about to write.
Make a list of what you need this person to do. Not the technical skills they should have, but the needs you hope those skills will fulfill. Do you need branding? Interface design? Illustrations? Forms? What kind of business are you in? Is it editorial? Are you a retailer that needs a catalog designed? Don’t forget to take care of your mobile needs. Trust me, you have mobile needs. (You’ve had them since yesterday.)
The result of this exercise may look something like this: “We need a designer with mobile experience who can do branding and interface design for complex data.” The longer that list gets, the more you’ll pay for a designer, and this exercise may help you realize that you need more than one person. A capable illustrator who can build a responsive site and understands agile workflow is a rare unicorn indeed.
Now let’s go find us some designers!
- Making Our Events More Inclusive For Those Under 21 (and Also Everyone Else)
On Saturday, Benjamin Hollway, a 16-year-old front-end developer, wrote a post about his recent experiences attending industry events. He’s been coding since he was eight, and earlier this year he was shortlisted for Netmag’s Emerging Talent category. Yet young people like him aren’t able to participate fully in the sort of activities most of us take for granted.
Last week, Benjamin attended an event I spoke at in London. He’d saved up to buy a ticket and travel up to the conference, and after the event he followed everyone to the after party to chat about the conference and meet some of the speakers. Everyone was allowed in, but he was turned away at the door and had to head back home early.
This isn’t the first time he’s experienced this, and I remember far too well the same happening to me as well. Four years ago, I wrote about some of the difficulties I’d experienced as a young developer when it came to attending events. A lot of the meetups I wanted to go to were held in bars, and if there was someone checking IDs at the door, I couldn’t go.
After parties are a really important part of a conference. They’re where we get to network, ask speakers questions about the talk they’ve just given, and generally have a good time meeting like-minded people. But so many of these after parties, and even events, are held in pubs and bars, meaning they’re completely off-limits to young people.
I feel lucky that I live in a country where I could access most events when I turned 18 (although I have been prevented from going into others that are held in 21-or-over bars). In other countries, I wouldn’t be able to attend some events until I was 21.
@anna_debenham Agreed. There's nothing worse than being rejected for what constitutes the person you are and you have no control of. — Anne-Gaelle Colom (@agcolom), September 27, 2014
I know a lot of amazingly smart designers and developers who are under 18, and many of them are physically prevented from attending an industry event or after party after traveling all that way and often forking out hundreds of pounds of their own money to attend. The more young people we encourage to join the fold, the more we are excluding from these events.
.@anna_debenham Couldn't agree more with your 2010 blog post. Had to leave tech events a few times before I was 18 :( — Jordan Hatch (@1jh), September 27, 2014
Holding events in age-restricted venues doesn’t just exclude those under 21. It also turns away people who don’t drink for medical or personal reasons, or because of their faith, as is the case for many Muslims. They can’t simply wait until they get older; some of these people will never be able to attend.
If you’re an event or meetup organizer, please don’t exclude young designers and developers by holding your event in age-restricted venues. When London Web Standards realized that young developers who wanted to go couldn’t attend, they switched to holding their events in offices, making them accessible to both young people and people who would be excluded because of their faith, or for other reasons. They were delighted when young developers started to turn up to their events.
There are a lot more creative things to do around an event that don’t involve hanging around at a noisy bar, which is something Rachel Andrew wrote about last year:
Finally, how about taking Benjamin’s suggestion and asking young people to speak at your event? They have a huge amount to offer, and will help suggest ways to make your event more open, not just to those under 18, but also to groups of people you may not have even considered.
@anna_debenham there also seems to be a valuable crossover between avoiding age restricted locations and creating safe spaces. Win-win? — Matthew Wheeler (@Matt_Wheel), September 27, 2014
Oh, and if your event is open to young people, please add it to the Lanyrd list I’ve created for events open to those under 21 so that others can find it.
- Shellshock: A Bigger Threat than Heartbleed?
Time to update those Linux servers again. A newly discovered flaw in Bash, the shell installed on most Linux and Unix systems, may be more pervasive, and more dangerous, than last spring’s Heartbleed.
This new vulnerability, dubbed Shellshock, has already been found in use against public servers, meaning the threat is not theoretical. A patch has been released, but according to Ars Technica, it’s unfortunately incomplete.
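Shellshock (CVE-2014-6271) stems from Bash executing code smuggled in after a function definition passed through an environment variable. The widely circulated one-line check below is safe to run; whether it reports a problem depends on your Bash version:

```shell
# A vulnerable Bash executes the trailing "echo vulnerable" while
# importing the function definition, so it prints "vulnerable" first.
# A patched Bash prints only "this is a test" (possibly with a warning
# about ignoring the attempted function definition).
env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
```

If the word “vulnerable” appears, update the bash package on that machine immediately.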
- Antoine Lefeuvre on The Web, Worldwide: The Culinary Model of Web Design
We call ourselves information architects, web designers or content strategists, among other job titles in the industry, including the occasional PHP ninja or SEO rockstar. The web does owe a lot to fields like architecture, industrial design, or marketing. I still haven’t met an interaction cook or maitre d’optimization, though. No web makers turn to chefs for inspiration, one might say.
Well, some do. Let me take you, s’il vous plaît, to Lyon, France, where people think sliced bread is the greatest thing since the internet.
Just a hundred miles from the web’s birthplace at CERN in Geneva lies Lyon, France’s second biggest city. It’s no internet mecca, but that doesn’t mean there are no lessons to be learned from how people make the web there. Unlike many places in the world where the latest new thing is everyone’s obsession, entrepreneurs in Lyon are quite interested in… the nineteenth century! What they’re analyzing is their city’s greatest success, its cuisine.
If Lyon’s food scene today is one of the world’s best—even outshining Paris’s, according to CNN—it is thanks to the Mères lyonnaises movement. These “mothers” were house cooks for Lyon’s rich families who decided to emancipate themselves and launch their own start-ups: humble restaurants aiming at top-quality food, not fanciness. The movement, begun in the nineteenth century, only grew bigger in the twentieth, when the Mères passed on their skills and values to the next generation. Their most famous heir is superstar chef Paul Bocuse, who has held the Michelin three-star rating longer than any other chef, and who began as the apprentice of Mère Eugénie Brazier, the mother of modern French cooking and one of the very first three-star chefs, in 1928.

“There’s a real parallel between the ecosystem the Mères started and what we want to achieve,” says Grégory Palayer, president of the aptly named local trade association La Cuisine du Web. To recreate the Mères’ recipe for success, the toqués—the nickname, meaning both “chef’s hat” and “crazy,” given to La Cuisine du Web members—have identified its ingredients: networking, media support, funding, and transmitting skills and knowledge. Not to mention a secret plus: joie de vivre. “Parisians and Europeans are often surprised to see we can spend two hours having lunch,” says Grégory. “This is how we conduct business here!”
Lyon’s designers too have their nineteenth-century hero in Auguste Escoffier, the celebrity chef of his age. He began his career as a kitchen boy in his uncle’s restaurant and ended up running the kitchens in London’s most luxurious hotels. Renowned as “the Chef of Kings and the King of Chefs,” Escoffier was also a serial designer: his creations include Peach Melba, Crêpe Suzette, and the Cuisine classique style. He even experimented in a culinary form of design under constraint while in the army during the 1870 Franco-Prussian War, using horse meat for ordinary meals to save scarce beef for the wounded, and inventing 1,001 recipes with turnip, the only readily available vegetable on the front lines. Escoffier did much to improve and structure his industry. He was the first head of the WACS, the chefs’ W3C, and revolutionized not only French cooking, but the way restaurants worldwide are run, by championing documentation, standardization, and professionalism.
In his talk “Interaction Béchamel” at the Interaction 14 conference in Amsterdam, Lyon’s IxDA leader Guillaume Berry explained how the life and work of Escoffier could influence web design. Guillaume comes from a family of food lovers and makers. Himself a visual designer and an amateur cook, he is greatly inspired in his daily work by cuisine. “It’s all about quality ingredients and preparing them. I’ve realized this while chopping vegetables—a task often neglected or disliked.” The web’s raw ingredients are copy, images, videos: “Even a starred chef won’t be able to cook a proper dish with low-quality ingredients. Don’t expect a web designer to do wonders without great content.”
Just as Escoffier took Ritz customers on a kitchen tour, Guillaume recommends explaining to your clients how their site or app has been cooked. The more open and understood our design processes are, the more their value will be recognized. Have you ever been running late and prepared dinner in a rush? I have and it was, unsurprisingly, a disaster. So tell your clients their website is nothing but a good meal; it takes time to make it a memorable experience.
Looking back at other industries helps us see what’s ahead in ours. What could be the web’s answer to slow food, organic farming, or rawism? “How many interactions a day is it healthy for us to have?” asks Guillaume. He adds, “Cooks have a huge responsibility because depending on how they prepare the food they can make people sick.” Are we designers that powerful? Oh yes, and more—we destroyed the world, after all.
No, the web industry isn’t free of junk food. When we create apps that make a smartphone obsolete after two years: junk food. When we believe email is dead and Facebook is the new communication standard: junk food. When we design only for the latest browsers and fastest connections: junk food.
If we’re ready to move from “more” to “better,” let’s remember these simple rules from Eugénie Brazier: 1. Pick your ingredients very carefully; 2. Home-made first; 3. A flashy presentation won’t save a poor dish.
- It Was Just A Thing
A little less than two months ago, I wrote about the most dangerous word in software development: just. A lot of assumptions hide behind that seemingly harmless word, but there’s another side to it.
“It was just a thing we built to deploy our work to staging.”
“It was just a little plugin we built to handle responsive tab sets.”
“It was just a way to text a bunch of our friends at the same time.”
Some of the best and most useful things we build have humble beginnings. Small side projects start with a sapling of an idea—something that can be built in a weekend, but will make our work a little easier, our lives a little better.
We focus on solving a very specific problem, or fulfilling a very specific need. Once we start using the thing we’ve built, we realize its full potential. We refine our creation until it becomes something bigger and better. By building, using, and refining, we avoid the pitfalls of assumptions made by the harmful use of the word “just” that I warned about.
But the people who build something shouldn’t be the only ones who shape its future. When Twitter was founded, it was just a way to text a bunch of friends at once. The way that people used Twitter in the early days helped determine its future. Retweets, @username mentions, and hashtags became official parts of Twitter because of those early usage patterns.
Embrace the small, simple, focused start, and get something into people’s hands. Let usage patterns inform refinements, validate assumptions, and guide you to future success. It’s more than okay to start by building “just a thing”—in fact, I suggest it.
- Getting Started With CSS Audits
This week I wrote about conducting CSS audits to organize your code, keeping it clean and performant—resulting in faster sites that are easier to maintain. Now that you understand the hows and whys of auditing, let’s take a look at some more resources that will help you maintain your CSS architecture. Here are some I’ve recently discovered and find helpful.
- Harry Roberts has put together a fantastic resource for thinking about how to write large CSS systems, CSS Guidelines.
- Interested in making the style guide part of the audit easier? This GitHub repo includes a whole bunch of info on different generators.
Help from task runners
Do you like task runners such as grunt or gulp? Addy Osmani’s tutorial walks through using all kinds of task runners to find unused CSS selectors: Spring Cleaning Unused CSS Selectors.
Are you interested in auditing for accessibility as well (hopefully you are!)? There are tools for that, too. This article helps you audit your site for accessibility—it’s a great outline of exactly how to do it.
- Sitepoint takes a look at trimming down overall page weight, which would optimize your site quite a bit.
- Google Chrome’s dev tools include a built-in audit tool, which suggests ways you could improve performance. A great article on HTML5 Rocks goes through this tool in depth.
With these tools, you’ll be better prepared to clean up your CSS, optimize your site, and make the entire experience better for users. When talking about auditing code, many people focus on performance, which is a great benefit for all involved, but don’t forget that maintainability and speedier development time come along with a faster site.
- Client Education and Post-Launch Success
What our clients do with their websites is just as important as the websites themselves. We may pride ourselves on building a great product, but it’s ultimately up to the client to see it succeed or fail. Even the best website can become neglected, underused, or messy without a little education and training.
Too often, my company used to create amazing tools for clients and then send them out into the world without enough guidance. We’d watch our sites slowly become stale, and we’d see our strategic content overwritten with fluffy filler.
It was no one’s fault but our own.
As passionate and knowledgeable web enthusiasts, it’s literally our job to help our clients succeed in any way we can, even after launch. Every project is an opportunity to educate clients and build a mutually beneficial learning experience.
Meeting in the middle
If we want our clients to use our products to their full potential, we have to meet them in the middle. We have to balance our technical expertise with their existing processes and skills.
At my company, Brolik, we learned this the hard way.
We had a financial client whose main revenue came from selling in-depth PDF reports. Customers would select a report, generating an email to an employee who would manually create and email an unprotected PDF to the customer. The whole process would take about two days.
To make the process faster and more secure, we built an advanced, password-protected portal where their customers could purchase and access only the reports they’d paid for. The PDFs themselves were generated on the fly from the content management system. They were protected even after they were downloaded and only viewable with a unique username and password generated with the PDF.
The system itself was technically advanced and thoroughly solved our client’s needs. When the job was done, we patted ourselves on the back, added the project to our portfolio, and moved on to the next thing.
The client, however, was generally confused by the system we’d built. They didn’t quite know how to explain it to their customers. Processes had been automated to the point where they seemed untrustworthy. After about a month, they asked us if we’d revert back to their previous system.
We had created too large a process change for our client. We upended a large part of their business model without really considering whether they were ready for a new approach.
From that experience, we learned not only to create online tools that complement our clients’ existing business processes, but also that we can be instrumental in helping clients embrace new processes. We now see it as part of our job to educate our clients and explain the technical and strategic thought behind all of our decisions.
Leading by example
We put this lesson to work on a more recent project, developing a site-wide content tagging system where images, video, and other media could be displayed in different ways based on how they were tagged.
We could have left our clients to figure out this new system on their own, but we wanted to help them adopt it. So we pre-populated content and tags to demonstrate functionality. We walked through the tagging process with as many stakeholders as we could. We even created a PDF guide to explain the how and why behind the new system.
In this case, our approach worked, and the client’s cumbersome media management time was significantly reduced. The difference between the outcome of the two projects was simply education and support.
Education and support can, and usually does, take the form of setting an example. Some clients may not fully understand the benefits of a content strategy, for instance, so you have to show them results. Create relevant and well-written sample blog posts for them, and show how they can drive website traffic. Share articles and case studies that relate to the new tools you’re building for them. Show them that you’re excited, because excitement is contagious. If you’re lucky and smart enough to follow Geoff Dimasi’s advice and work with clients who align with your values, this process will be automatic, because you’ll already be invested in their success.
We should be teaching our clients to use their website, app, content management system, or social media correctly and wisely. The more adept they are at putting our products to use, the better our products perform.
Dealing with budgets
Client education means new deliverables, which have to be prepared by those directly involved in the project. Developers, designers, project managers, and other team members are responsible for creating the PDFs, training workshops, interactive guides, and other educational material.
That means more organizing, writing, designing, planning, and coding—all things we normally bill for, but now we have to bill in the name of client education.
Take this into account at the beginning of a project. The amount of education a client needs can be a consideration for taking a job at all, but it should at least factor into pricing. Hours spent helping your client use your product is billable time that you shouldn’t give away for free.
At Brolik, we’ve helped a range of clients—from those who have “just accepted that the Web isn’t a fad” (that’s an actual quote from 2013), to businesses that have a team of in-house developers. We consider this information and price accordingly, because it directly affects the success of the entire product and partnership. If they need a lot of education but they’re not willing to pay for it, it may be smart to pass on the job.
Most clients actually understand this. Those who are interested in improving their business are interested in improving themselves as well. This is the foundation for a truly fulfilling and mutually beneficial client relationship. Seek out these relationships.
It’s sometimes challenging to justify a “client education” line item in your proposals, however. If you can’t, try to at least work some wiggle room into your price. More specifically, try adding a 10 percent contingency for “Support and Training” or “Onboarding.”
If you can’t justify a price increase at all, but you still want the job, consider factoring in a few client education hours and their opportunity cost as part of your company’s overall marketing budget. Teaching your client to use your product is your responsibility as a digital business.
This never ends (hopefully)
What’s better than arming your clients with knowledge and tools, pumping them up, and then sending them out into the world to succeed? Venturing out with them!
At Brolik, we’ve started signing clients onto digital strategy retainers once their websites are completed. Digital strategy is an overarching term that covers anything and everything to grow a business online. Specifically for us, it includes audience research, content creation, SEO, search and display advertising, website maintenance, social media, and all kinds of analysis and reporting.
This allows us to continue to educate (and learn) on an ongoing basis. It keeps things interesting—and as a bonus, we usually upsell more work.
We’ve found that by fostering collaboration post-launch, we not only help our clients use our product more effectively and grow their business, but we also alleviate a lot of the panic that kicks in right before a site goes live. They know we’ll still be there to fix, tweak, analyze, and even experiment.
This ongoing digital strategy concept was so natural for our business that it’s surprising it took us so long to implement it. After 10 years making websites, we’ve only offered digital strategy for the last two, and it’s already driving 50 percent of our revenue.
It pays to be along for the ride
The extra effort required for client education is worth it. By giving our clients the tools, knowledge, and passion they need to be successful with what we’ve built for them, we help them improve their business.
Anything that drives their success ultimately drives ours. When the tools we build work well for our clients, they return to us for more work. When their websites perform well, our portfolios look better and live longer. Overall, when their business improves, it reflects well on us.
A fulfilling and mutually beneficial client relationship is good for the client and good for future business. It’s an area where we can follow our passion and do what’s right, because we get back as much as we put in.
- CSS Audits: Taking Stock of Your Code
Most people aren’t excited at the prospect of auditing code, but it’s become one of my favorite types of projects. A CSS audit is really detective work. You start with a site’s code and dig deeper: you look at how many stylesheets are being called, how that affects site performance, and how the CSS itself is written. Your goal is to look for ways to improve on what’s there—to sleuth out fixes to make your codebase better and your site faster.
I’ll share tips on how to approach your own audit, the advantages of taking a full inventory of your CSS, and various tools that can help.
Benefits of an audit
An audit helps you to organize your code and eliminate repetition. You don’t write any code during an audit; you simply take stock of what’s there and document recommendations to pass off to a client or discuss with your team. These recommendations ensure new code won’t repeat past mistakes. Let’s take a closer look at other benefits:
- Reduce file sizes. A complete overview of the CSS lets you take the time to find ways to refactor the code: to clean it up and perhaps cut down on the number of properties. You can also hunt for any odds and ends, such as outdated versions of browser prefixes, that aren’t in use anymore. Getting rid of unused or unnecessary code trims down the file people have to download when they visit your site.
- Ensure consistency with guidelines. As you audit, create documentation regarding your styles and what’s happening with the site or application. You could make a formal style guide, or you could just write out recommendations to note how different pieces of your code are used. Whatever form your documentation takes, it’ll save anyone coming onto your team a lot of time and trouble, as they can easily familiarize themselves with your site’s CSS and architecture.
- Standardize your code. Code organization—which certainly attracts differing opinions—is essential to keeping your codebase more maintainable into the future. For instance, if you choose to alphabetize your properties, you can readily spot duplicates, because you’d end up with two sets of margin properties right next to each other. Or you may prefer to group properties according to their function: positioning, box model-related, etc. Having a system in place helps you guard against repetition.
- Increase performance. I’ve saved the best for last. Auditing code, along with combining and zipping up stylesheets, leads to markedly faster site speeds. For example, Harry Roberts, a front-end architect in the UK who conducts regular audits, told me how much lighter and faster a site he recently worked on became after one.
This is a huge win, especially for people on slower connections—but everyone gains when sites load quickly.
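The file-size half of that win is easy to measure yourself by comparing raw and gzipped stylesheet sizes. This is only a sketch: the generated file below stands in for your real stylesheet, and the names are made up.

```shell
# Build a stand-in stylesheet (200 identical 20-byte rules) to measure.
for i in $(seq 1 200); do echo "body { margin: 0; }"; done > combined.css

# Raw bytes the browser would download without compression:
wc -c < combined.css

# Bytes after gzip, roughly what a well-configured server sends:
gzip -c combined.css | wc -c
```

Run against your actual production CSS, the gap between the two numbers is the transfer weight an audit (plus server compression) can claw back.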
How to audit: take inventory
Now that audits have won you over, how do you go about doing one? I like to start with a few tools that provide an overview of the site’s current codebase. You may approach your own audit differently, based on your site’s problem areas or your philosophy of how you write code (whether OOCSS or BEM). The important thing is to keep in mind what will be most useful to you and your own site.
Once I’ve diagnosed my code through tools, I examine it line by line.
The first tool I reach for is Nicole Sullivan’s invaluable Type-o-matic, an add-on for Firebug that generates a JSON report of all the type styles in use across a site. As an added bonus, Type-o-matic creates a visual report as it runs. By looking at both reports, you know at a glance when to combine type styles that are too similar, eliminating unnecessary styles. I’ve found that the detail of the JSON report makes it easy to see how to create a more reusable type system.
In addition to Type-o-matic, I run CSS Lint, an extremely flexible tool that flags a wide range of potential problems, from missing fallback colors to places where shorthand properties would improve performance. To use CSS Lint, click the arrow next to the word “Lint” and choose the options you want. I like to check for repeated properties or too many font sizes, so I always run Maintainability & Duplication along with Performance. CSS Lint then returns recommendations for changes; some may be related to known issues that will break in older browsers and others may be best practices (as the tool sees them). CSS Lint isn’t perfect. If you run it leaving every option checked, you are bound to see things in the end report that you may not agree with, like warnings for IE6. That said, this is a quick way to get a handle on the overall state of your CSS.
Next, I search through the CSS to review how often I repeat common properties, like margin. (If you’re comfortable with the command line, type grep along with instructions and plug in something like grep “float” styles/styles.scss to find all instances of “float”.) Note any properties you may cut or bundle into other modules. Trimming your properties is a balancing act: to reduce the number of repeated properties, you may need to add more classes to your HTML, so that’s something you’ll need to gauge according to your project.
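A couple of grep variations make the repetition count concrete. The stylesheet below is a made-up example for illustration; point the commands at your own files.

```shell
# A tiny stand-in stylesheet for demonstration purposes.
mkdir -p styles
cat > styles/styles.scss <<'EOF'
.card { float: left; margin: 0 10px; }
.sidebar { float: right; margin: 0 10px; }
.footer { clear: both; }
EOF

# How many lines mention float?
grep -c "float" styles/styles.scss

# Which margin values recur? Repeats are candidates for a shared class.
grep -o "margin:[^;]*" styles/styles.scss | sort | uniq -c
```

The second command is the useful one: any value with a count above 1 is a spot where a shared class or mixin could replace copy-pasted declarations.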
I like to do this step by hand, as it forces me to walk through the CSS on my own, which in turn helps me better understand what’s going on. But if you’re short on time, or if you’re not yet comfortable with the command line, tools can smooth the way:
- CSS Dig is an automated script that runs through all of your code to help you see it visually. A similar tool is StyleStats, where you type in a url to survey its CSS.
- CSS Colorguard is a brand-new tool that runs on Node and outputs a report based on your colors, so you know if any colors are too alike. This helps limit your color palette, making it easier to maintain in the future.
- Dust-Me Selectors is an add-on for Firebug in Firefox that finds unused selectors.
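If you can’t run a Node tool like CSS Colorguard, a rough command-line stand-in can still surface duplicated hex colors. The file here is a hypothetical example:

```shell
# Stand-in stylesheet with one color repeated in two notations.
cat > palette.css <<'EOF'
a { color: #336699; }
h1 { color: #336699; }
p { color: #369; }
em { color: #FF0000; }
EOF

# Normalize case, then count each distinct hex value, most frequent first.
grep -o "#[0-9a-fA-F]\{3,6\}" palette.css | tr 'A-F' 'a-f' | sort | uniq -c | sort -rn
```

Note its limits: it won’t know that shorthand #369 and #336699 are the same color, let alone that two different hex values are visually near-identical. Spotting those is exactly what Colorguard automates.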
Line by line
After you run your tools, take the time to read through the CSS; it’s worth it to get a real sense of what’s happening. For instance, comments in the code—that tools miss—may explain why some quirk persists.
One big thing I double-check is the depth of applicability, or how far down an attribute string applies. Does your CSS rely on a lot of specificity? Are you seeing long strings of selectors, either in the style files themselves or in the output from a preprocessor? A high depth of applicability means your code will require a very specific HTML structure for styles to work. If you can scale it back, you’ll get more reusable code and speedier performance.
Review and recommend
Now to the fun part. Once you have all your data, you can figure out how to improve the CSS and make some recommendations.
The recommendation document doesn’t have to be heavily designed or formatted, but it should be easy to read. Splitting it into two parts is a good idea. The first consists of your review, listing the things you’ve found. If you refer to the results of CSS Lint or Type-o-matic, be sure to include either screenshots or the JSON report itself as an attachment. The second half contains your actionable recommendations to improve the code. This can be as simple as a list, with items like “Consolidate type styles that are closely related and create mixins for use sitewide.”
As you analyze all the information you’ve collected, look for areas where you can:
- Tighten code. Do you have four different sets of styles for a call-out box, several similar link styles, or way too many exceptions to your standard grid? These are great candidates for repeatable modular styles. To make consolidation even easier, you could use a preprocessor like Sass to turn them into mixins or extend, allowing styles to be applied when you call them on a class. (Just check that the outputted code is sensible too.)
- Keep code consistent. A good audit makes sure the code adheres to its own philosophy. If your CSS is written based on a particular approach, such as BEM or OOCSS, is it consistent? Or do styles veer from time to time, and are there acceptable deviations? Make sure you document these exceptions, so others on your team are aware.
If you’re working with a client, it’s also important to explain the approaches you favor, so they understand where you’re coming from—and what things you may consider as issues with the code. For example, I prefer OOCSS, so I tend to push for more modularity and reusability; a few classes stacked up (if you aren’t using a preprocessor) don’t bother me. Making sure your client understands the context of your work is particularly crucial when you’re not on the implementation team.
Hand off to the client
You did it! Once you’ve written your recommendations (and taken some time to think on them and ensure they’re solid), you can hand them off to the client—be prepared for any questions they may have. If this is for your team, congratulations: get cracking on your list.
But wait—an audit has even more rewards. Now that you’ve got this prime documentation, take it a step further: use it as the springboard to talk about how to maintain your CSS going forward. If the same issues kept popping up throughout your code, document how you solved them, so everyone knows how to proceed in the future when creating new features or sections. You may turn this document into a style guide. Another thing to consider is how often to revisit your audit to ensure your codebase stays squeaky clean. The timing will vary by team and project, but set a realistic, regular schedule—this is a key part of the auditing process.
Conducting an audit is a vital first step to keeping your CSS lean and mean. It also helps your documentation stay up to date, allowing your team to have a good handle on how to move forward with new features. When your code is structured well, it’s more performant—and everyone benefits. So find the time, grab your best sleuthing hat, and get started.
- Rian van der Merwe on A View from a Different Valley: Work Life Imbalance
I’m old enough to remember when laptops entered the workforce. It was an amazing thing. At first only the select few could be seen walking around with their giant black IBMs and silver Dells. It took a few years, but eventually every new job came with the question we all loved to hear: “desktop or laptop?”
I was so happy when I got my first laptop at work. “Man,” I thought, “now I can work anywhere, any time!” It was fun for a while, until I realized that now I could work anywhere, any time. Slowly our office started to reflect this newfound freedom. Work looked less and less like work, and more and more like home. Home offices became a big thing, and it’s now almost impossible to distinguish between home offices of famous designers and the workspaces (I don’t think we even call them “offices” any more) of most startups.
Work and life: does it blend?
There is a blending of work and life that woos us with its promise of barbecues at work and daytime team celebrations at movie theaters, but we’re paying for it in another way: a complete eradication of the line between home life and work life. “Love what you do,” we say. “Get a job you don’t want to take a vacation from,” we say—and we sit back and watch the retweets stream in.
I don’t like it.
I don’t like it for two reasons.
It makes us worse at our jobs
There’s plenty of research that shows when employers place strict limits on messaging, employees are happier and enjoy their work more. And productivity isn’t affected negatively at all. Clive Thompson’s article about this for Mother Jones is a great overview of what we know about the handful of experiments that have been done to research the effects of messaging limits.
But that’s not even the whole story. It’s not just that constantly thinking about work makes us more stressed, it’s also that our fear of doing nothing—of not being productive every second of the day—is hurting us as well (we’ll talk about side projects another time). There’s plenty of research about this as well, but let’s stick with Jessica Stillman’s Bored at Work? Good. It’s a good overview of what scientists have found on the topic of giving your mind time to rest. In short, being idle tells your brain that it’s in need of something different, which stimulates creative thinking. So it’s something to be sought out and cherished—not something to be shunned.
It teaches that boundaries are bad
The second problem I have with our constant pursuit of the productivity train is that it teaches us that setting boundaries to spend time with our friends and family = laziness. I got some raised eyebrows at work recently when I declined an invitation to watch a World Cup game in a conference room. But here’s the thing. If I watch the World Cup game with a bunch of people at work today, guess what I have to do tonight? I have to work to catch up, instead of spending time with my family. And that is not ok with me.
I have a weird rule about this. Work has me—completely—between the hours of 8:30 a.m. and 6:00 p.m. It has 100 percent of my attention. But outside of those hours I consider it part of being a sane and good human to give my kids a bath, chat to my wife, read, and reflect on the day that’s past and the one that’s coming—without the pressure of having to be online all the time. I swear it makes me a better (and more productive) employee, but I can’t shake the feeling that I shouldn’t be writing this down because you’re just going to think I’m lazy.
But hey, I’m going to face my fear and just come right out and say it: I try not to work nights. There. That felt good.
It doesn’t always work out, and of course there are times when a need is pressing and I take care of it at night. I don’t have a problem with that. But I don’t sit and do email for hours every night. See, the time I spend with people is what gives my work meaning. I do what I do for them—for the people in my life, the people I know, and the people I don’t. If we never spend time away from our work, how can we understand the world and the people we make things for?
Permission to veg out
So I guess this column is my attempt to give you permission to do nothing every once in a while. Not to be lazy, or not do your job. But to take the time you need to get better at what you do, and enjoy it a lot more.
As this column evolves, I think this is what I’ll be talking about a lot. How to make the hours we have at work count more. How to think of what we do not as the tech business but the people business. How to give ourselves permission to experience the world around us and get inspiration for our work from that. How to be a flâneur: wandering around with eyes wide open to inspiration.
- Awkward Cousins
As an industry, we’re historically terrible at drawing lines between things. We try to segment devices based on screen size, but that doesn’t take into account hardware functionality, form factor, and usage context, for starters. The laptop I’m writing this on has the same resolution as a 1080p television. They’d be lumped into the same screen-size–dependent groups, but they are two totally different device classes, so how do we determine what goes together?
That’s a simple example, but it points to a larger issue. We so desperately want to draw lines between things, but there are often too many variables to make those lines clean.
Why, then, do we draw such strict lines between our roles on projects? What does the area of overlap between a designer and front-end developer look like? A front- and back-end developer? A designer and back-end developer? The old thinking of defined roles is certainly loosening up, but we still have a long way to go.
The chasm between roles that is most concerning is the one between web designers/developers and native application designers/developers. We often choose a camp early on and stick to it, which is a mindset that may have been fueled by the false “native vs. web” battle a few years ago. It was positioned as an either-or decision, and hybrid approaches were looked down upon.
The people using the things we build don’t see the divide as harshly as we do, if at all. More importantly, the development environments are becoming more similar, as well. Swift, Apple’s brand new programming language for iOS and Mac development, has a strong resemblance to the languages we know and love on the web, and that’s no accident. One of Apple’s top targets for Swift, if not the top target, is the web development community. It’s a massive, passionate, and talented pool of developers who, largely, have not done iOS or Mac work—yet.
As someone who spans the divide regularly, it’s sad to watch these two communities keep at arm’s length like awkward cousins at a family reunion. We have so much in common—interests, skills, core values, and a ton of technological ancestry. The difference between the things we build is shrinking in the minds of our shared users, and the ways we build those things are aligning. I dream of the day when we get over our poorly drawn lines and become the big, happy community I know we can be.
- Watch: A New Documentary About Jeffrey Zeldman
It’s a philosophy that’s always guided us at A List Apart: that we all learn more—and are more successful—when we share what we know with anyone who wants to listen. And it comes straight from our publisher, Jeffrey Zeldman.
For 20 years, he’s been sharing everything he can with us, the people who make websites—from advice on table layouts in the ‘90s to Designing With Web Standards in the 2000s to educating the next generation of designers today.
Our friends at Lynda.com just released a documentary highlighting Jeffrey’s two decades of designing, organizing, and most of all sharing on the web. You should watch it.
- Git: The Safety Net for Your Projects
I remember January 10, 2010, rather well: it was the day we lost a project’s complete history. We were using Subversion as our version control system, which kept the project’s history in a central repository on a server. And we were backing up this server on a regular basis—at least, we thought we were. The server broke down, and then the backup failed. Our project wasn’t completely lost, but all the historic versions were gone.
Shortly after the server broke down, we switched to Git. I had always seen version control as torturous; it was too complex and not useful enough for me to see its value, though I used it as a matter of duty. But once we’d spent some time on the new system, I began to understand just how helpful Git could be. Since then, it has saved my neck in many situations.
During the course of this article, I’ll walk through how Git can help you avoid mistakes—and how to recover if they’ve already happened.
Every teammate is a backup
Since Git is a distributed version control system, every member of our team that has a project cloned (or “checked out,” if you’re coming from Subversion) automatically has a backup on his or her disk. This backup contains the latest version of the project, as well as its complete history.
This means that should a developer’s local machine or even our central server ever break down again (and the backup not work for any reason), we’re up and running again in minutes: any local repository from a teammate’s disk is all we need to get a fully functional replacement.
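You can see that recovery in miniature with a throwaway repository. The paths and the identity settings below are placeholders; in a real incident you’d clone from a teammate’s actual working copy.

```shell
# Stand in for a teammate's local clone of the project.
mkdir -p /tmp/teammate-repo && cd /tmp/teammate-repo
git init -q
git config user.email "demo@example.com"  # placeholder identity
git config user.name "Demo User"
echo "hello" > readme.txt
git add readme.txt
git commit -q -m "First commit"

# "Recover": cloning from that disk restores the project and its history.
git clone -q /tmp/teammate-repo /tmp/recovered
cd /tmp/recovered
git log --oneline  # the complete history came along with the clone
```

Because every clone is a full copy, the recovered repository can immediately serve as the new central one.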
Branches keep separate things separate
When my more technical colleagues told me about how “cool” branching in Git was, I wasn’t bursting with joy right away. First, I have to admit that I didn’t really understand the advantages of branching. And second, coming from Subversion, I vividly remembered it being a complex and error-prone procedure. With some bad memories, I was anxious about working with branches and therefore tried to avoid it whenever I could.
It took me quite a while to understand that branching and merging work completely differently in Git than in most other systems—especially regarding its ease of use! So if you learned the concept of branches from another version control system (like Subversion), I recommend you forget your prior knowledge and start fresh. Let’s start by understanding why branches are so important in the first place.
Why branches are essential
Back in the days when I didn’t use branches, working on a new feature was a mess. Essentially, I had the choice between two equally bad workflows:
(a) I already knew that creating small, granular commits with only a few changes was a good version control habit. However, if I did this while developing a new feature, every commit would mingle my half-done feature with the main code base until I was done. It wasn’t very pleasant for my teammates to have my unfinished feature introduce bugs into the project.
(b) To avoid getting my work-in-progress mixed up with other topics (from colleagues or myself), I’d work on a feature in my separate space. I would create a copy of the project folder that I could work with quietly—and only commit my feature once it was complete. But committing my changes only at the end produced a single, giant, bloated commit that contained all the changes. Neither my teammates nor I could understand what exactly had happened in this commit when looking at it later.
I slowly understood that I had to make myself familiar with branches if I wanted to improve my coding.
Working in contexts
Any project has multiple contexts where work happens; each feature, bug fix, experiment, or alternative of your product is actually a context of its own. It can be seen as its own “topic,” clearly separated from other topics.
If you don’t separate these topics from each other with branching, you will inevitably increase the risk of problems. Mixing different topics in the same context:
- makes it hard to keep an overview—and with a lot of topics, it becomes almost impossible;
- makes it hard to undo something that proved to contain a bug, because it’s already mingled with so much other stuff;
- doesn’t encourage people to experiment and try things out, because they’ll have a hard time getting experimental code out of the repository once it’s mixed with stable code.
Using branches gave me the confidence that I couldn’t mess up. In case things went wrong, I could always go back, undo, start fresh, or switch contexts.
Branching in Git actually only involves a handful of commands. Let’s look at a basic workflow to get you started.
To create a new branch based on your current state, all you have to do is pick a name and execute a single command on your command line. We’ll assume we want to start working on a new version of our contact form, and therefore create a new branch called “contact-form”:
$ git branch contact-form
Running the git branch command without a name specified will list all of the branches we currently have (and the "-v" flag provides us with a little more data than usual):
$ git branch -v
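In a fresh repository with a single commit and our new branch, the listing looks something like the following throwaway sketch (repository name, branch name, and commit message are invented; your hashes will differ):

```shell
# Throwaway repo to demonstrate the branch listing.
git init -b master demo && cd demo
git config user.email "dev@example.com" && git config user.name "Dev"
git commit --allow-empty -m "Initial commit"
git branch contact-form
git branch -v
# Output resembles:
#   contact-form 3f2a1bc Initial commit
# * master       3f2a1bc Initial commit
```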
You might notice the little asterisk on the branch named “master.” This means it’s the currently active branch. So, before we start working on our contact form, we need to make this our active context:
$ git checkout contact-form
Git has now made this branch our current working context. (In Git lingo, this is called the “HEAD branch.”) All the changes and every commit that we make from now on will only affect this single context—other contexts will remain untouched. If we want to switch the context to a different branch, we’ll simply use the git checkout command again.
In case we want to integrate changes from one branch into another, we can “merge” them into the current working context. Imagine we’ve worked on our “contact-form” feature for a while, and now want to integrate these changes into our “master” branch. All we have to do is switch back to this branch and call git merge:
$ git checkout master
$ git merge contact-form
I would strongly suggest that you use branches extensively in your day-to-day workflow. Branches are one of the core concepts that Git was built around. They are extremely cheap and easy to create, and simple to manage—and there are plenty of resources out there if you’re ready to learn more about using them.
There’s one thing that I’ve learned as a programmer over the years: mistakes happen, no matter how experienced people are. You can’t avoid them, but you can have tools at hand that help you recover from them.
One of Git’s greatest features is that you can undo almost anything. This gives me the confidence to try out things without fear—because, so far, I haven’t managed to really break something beyond recovery.
Amending the last commit
Even if you craft your commits very carefully, it’s all too easy to forget adding a change or to mistype the message. With the --amend flag of the git commit command, Git allows you to change the very last commit, and it’s a very simple fix to execute. For example, if you forgot to add a certain change and also made a typo in the commit subject, you can easily correct this:
$ git add some/changed/files
$ git commit --amend -m "The message, this time without typos"
There’s only one thing you should keep in mind: you should never amend a commit that has already been pushed to a remote repository. Respecting this rule, the “amend” option is a great little helper to fix the last commit.
(For more detail about the --amend option, I recommend Nick Quaranto’s excellent walkthrough.)
Undoing local changes
Changes that haven’t been committed are called “local.” All the modifications that are currently present in your working directory are “local” uncommitted changes.
Discarding these changes can make sense when your current work is… well… worse than what you had before. With Git, you can easily undo local changes and start over with the last committed version of your project.
If it’s only a single file that you want to restore, you can use the checkout command:
$ git checkout -- file/to/restore
Don’t confuse this use of the checkout command with switching branches (see above). If you use it with two dashes and (separated with a space!) the path to a file, it will discard the uncommitted changes in a given file.
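Here is a minimal end-to-end sketch of that restore in a throwaway repository (the file name and contents are invented):

```shell
git init -b master demo && cd demo
git config user.email "dev@example.com" && git config user.name "Dev"
echo "committed content" > file.txt
git add . && git commit -m "Commit the file"

echo "scratch work" > file.txt    # a local, uncommitted change
git checkout -- file.txt          # throw the local change away
cat file.txt                      # prints: committed content
```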
On a bad day, however, you might even want to discard all your local changes and restore the complete project:
$ git reset --hard HEAD
This will replace all of the files in your working directory with the last committed revision. Just as with using the checkout command above, this will discard the local changes.
Be careful with these operations: since local changes haven’t been checked into the repository, there is no way to get them back once they are discarded!
Undoing committed changes
Of course, undoing things is not limited to local changes. You can also undo certain commits when necessary—for example, if you’ve introduced a bug.
Basically, there are two main commands to undo a commit:
(a) git reset
The git reset command really turns back time. You tell it which version you want to return to and it restores exactly this state—undoing all the changes that happened after this point in time. Just provide it with the hash ID of the commit you want to return to:
$ git reset --hard 2be18d9
Using the --hard option is the easiest and cleanest approach, but it also wipes away all local changes that you might still have in your working directory. So, before doing this, make sure there aren’t any local changes you’ve set your heart on.
(b) git revert
The git revert command is used in a different scenario. Imagine you have a commit that you don’t want anymore—but the commits that came afterwards still make sense to you. In that case, you wouldn’t use the git reset command, because it would undo all those later commits, too!
The revert command, however, only reverts the effects of a certain commit. It doesn’t remove any commits, like git reset does. Instead, it even creates a new commit; this new commit introduces changes that are just the opposite of the commit to be reverted. For example, if you deleted a certain line of code, revert will create a new commit that introduces exactly this line, again.
To use it, simply provide it with the hash ID of the commit you want reverted:
$ git revert 2be18d9
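To see that revert really adds an inverse commit instead of deleting history, here is a throwaway sketch (file name, contents, and commit messages are invented):

```shell
git init -b master demo && cd demo
git config user.email "dev@example.com" && git config user.name "Dev"
echo "good line" > file.txt
git add . && git commit -m "Add good line"
echo "bad line" >> file.txt
git commit -am "Add bad line"

git revert --no-edit HEAD   # new commit that removes "bad line"
cat file.txt                # prints: good line
git log --oneline           # three commits; nothing was deleted
```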
When it comes to finding bugs, I must admit that I’ve wasted quite some time stumbling in the dark. I often knew that it used to work a couple of days ago—but I had no idea where exactly things went wrong. It was only when I found out about git bisect that I could speed up this process a bit. With the bisect command, Git provides a tool that helps you find the commit that introduced a problem.
Imagine the following situation: we know that our current version (tagged “2.0”) is broken. We also know that a couple of commits ago (our version “1.9”), everything was fine. The problem must have occurred somewhere in between.
This is already enough information to start our bug hunt with git bisect:
$ git bisect start
$ git bisect bad
$ git bisect good v1.9
After starting the process, we told Git that our current commit contains the bug and therefore is “bad.” We then also informed Git which previous commit is definitely working (as a parameter to git bisect good).
Git then restores our project in the middle between the known good and known bad conditions:
We now test this version (for example, by running unit tests, building the app, deploying it to a test system, etc.) to find out if this state works—or already contains the bug. As soon as we know, we tell Git again—either with git bisect bad or git bisect good.
Let’s assume we said that this commit was still “bad.” This effectively means that the bug must have been introduced even earlier—and Git will again narrow down the commits in question:
This way, you’ll find out very quickly where exactly the problem occurred. Once you know this, you need to call git bisect reset to finish your bug hunt and restore the project’s original state.
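If you can script the test, the whole good/bad loop can even be automated with git bisect run. Here is a self-contained sketch in a throwaway repository (the tags, file, and “bug” are all invented):

```shell
git init -b master demo && cd demo
git config user.email "dev@example.com" && git config user.name "Dev"
echo "ok" > app.txt
git add . && git commit -m "working version" && git tag v1.9
git commit --allow-empty -m "harmless change"
echo "BROKEN" > app.txt
git commit -am "introduce bug"          # the commit we want to find
git commit --allow-empty -m "more work" && git tag v2.0

# A check script that exits non-zero exactly when the bug is present
# (kept outside the repo so bisect checkouts never touch it):
printf '#!/bin/sh\n! grep -q BROKEN app.txt\n' > ../check.sh
chmod +x ../check.sh

git bisect start
git bisect bad v2.0
git bisect good v1.9
git bisect run ../check.sh   # reports "introduce bug" as the first bad commit
git bisect reset
```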
A tool that can save your neck
I must confess that my first encounter with Git wasn’t love at first sight. In the beginning, it felt just like my other experiences with version control: tedious and unhelpful. But with time, the practice became intuitive, and the tool gained my trust and confidence.
After all, mistakes happen, no matter how much experience we have or how hard we try to avoid them. What separates the pro from the beginner is preparation: having a system in place that you can trust in case of problems. It helps you stay on top of things, especially in complex projects. And, ultimately, it helps you become a better professional.
- Feel free to learn more about amending, reverting, and resetting commits.
- Make yourself familiar with “git bisect” with this detailed example.
- A detailed introduction to branching.
- Running Code Reviews with Confidence
Growing up, I learned there were two kinds of reviews I could seek out from my parents. One parent gave reviews in the form of a shower of praise. The other parent, the one with a degree from the Royal College of Art, would put me through a design crit. Today the reviews I seek are for my code, not my horse drawings, but it continues to be a process I both dread and crave.
In this article, I’ll describe my battle-tested process for conducting code reviews, highlighting the questions you should ask during the review process as well as the necessary version control commands to download and review someone’s work. I’ll assume your team uses Git to store its code, but the process works much the same if you’re using any other source control system.
Completing a peer review is time-consuming. In the last project where I introduced mandatory peer reviews, the senior developer and I estimated that it doubled the time to complete each ticket. The reviews introduced more context-switching for the developers, and were a source of increased frustration when it came to keeping the branches up to date while waiting for a code review.
The benefits, however, were huge. Coders gained a greater understanding of the whole project through their reviews, reducing silos and making onboarding easier for new people. Senior developers had better opportunities to ask why decisions were being made in the codebase that could potentially affect future work. And by adopting an ongoing peer review process, we reduced the amount of time needed for human quality assurance testing at the end of each sprint.
Let’s walk through the process. Our first step is to figure out exactly what we’re looking for.
Determine the purpose of the proposed change
Our code review should always begin in a ticketing system, such as Jira or GitHub. It doesn’t matter if the proposed change is a new feature, a bug fix, a security fix, or a typo: every change should start with a description of why the change is necessary, and what the desired outcome will be once the change has been applied. This allows us to accurately assess when the proposed change is complete.
The ticketing system is where you’ll track the discussion about the changes that need to be made after reviewing the proposed work. From the ticketing system, you’ll determine which branch contains the proposed code. Let’s pretend the ticket we’re reviewing today is 61524—it was created to fix a broken link in our website. It could just as equally be a refactoring, or a new feature, but I’ve chosen a bug fix for the example. No matter what the nature of the proposed change is, having each ticket correspond to only one branch in the repository will make it easier to review, and close, tickets.
Set up your local environment and ensure that you can reproduce what is currently the live site—complete with the broken link that needs fixing. When you apply the new code locally, you want to catch any regressions or problems it might introduce. You can only do this if you know, for sure, the difference between what is old and what is new.
Review the proposed changes
At this point you’re ready to dive into the code. I’m going to assume you’re working with Git repositories, on a branch-per-issue setup, and that the proposed change is part of a remote team repository. Working directly from the command line is a good universal approach, and allows me to create copy-paste instructions for teams regardless of platform.
To begin, update your local list of branches.
git fetch
Then list all available branches.
git branch -a
A list of branches will be displayed to your terminal window. It may appear something like this:
* master
  remotes/origin/master
  remotes/origin/HEAD -> origin/master
  remotes/origin/61524-broken-link
The * denotes the name of the branch you are currently viewing (or have “checked out”). Lines beginning with remotes/origin are references to branches we’ve downloaded. We are going to work with a new, local copy of the branch 61524-broken-link.
When you clone your project, you’ll have a connection to the remote repository as a whole, but you won’t have a read-write relationship with each of the individual branches in the remote repository. You’ll make an explicit connection as you switch to the branch. This means if you need to run the command git push to upload your changes, Git will know which remote repository you want to publish your changes to.
git checkout --track origin/61524-broken-link
Ta-da! You now have your own copy of the branch for ticket 61524, which is connected (“tracked”) to the origin copy in the remote repository. You can now begin your review!
First, let’s take a look at the commit history for this branch with the command
git log master..
Date: Mon Jun 30 17:23:09 2014 -0400

    Link to resources page was incorrectly spelled. Fixed.

    Resolves #61524.
This gives you the full log message of all the commits that are in the branch 61524-broken-link, but are not also in the master branch. Skim through the messages to get a sense of what’s happening.
Next, take a brief gander through the commit itself using the diff command. This command shows the difference between two snapshots in your repository. You want to compare the code on your checked-out branch to the branch you’ll be merging “to”—which conventionally is the master branch:
git diff master
How to read patch files
When you run the command to output the difference, the information will be presented as a patch file. Patch files are ugly to read. You’re looking for lines beginning with + or -. These are lines that have been added or removed, respectively. Scroll through the changes using the up and down arrows, and press q to quit when you’ve finished reviewing. If you need an even more concise comparison of what’s happened in the patch, consider modifying the diff command to list the changed files, and then look at the changed files one at a time:
git diff master --name-only
git diff master <filename>
Let’s take a look at the format of a patch file.
diff --git a/about.html b/about.html
index a3aa100..a660181 100644
--- a/about.html
+++ b/about.html
@@ -48,5 +48,5 @@
 (2004-05)
-A full list of <a href="emmajane.net/events">public
+A full list of <a href="http://emmajane.net/events">public
 presentations and workshops</a> Emma has given is available
I tend to skim past the metadata when reading patches and just focus on the lines that start with - or +. This means I start reading at the line immediately following @@. There are a few lines of context provided leading up to the changes. These lines are indented by one space each. The changed lines of code are then displayed with a preceding - (line removed) or + (line added).
Going beyond the command line
Using a Git repository browser, such as gitk, allows you to get a slightly better visual summary of the information we’ve looked at to date. The version of Git that Apple ships with does not include gitk—I used Homebrew to re-install Git and get this utility. Any repository browser will suffice, though, and there are many GUI clients available on the Git website.
When you run the command gitk, a graphical tool will launch from the command line. An example of the output is given in the following screenshot. Click on each of the commits to get more information about it. Many ticket systems will also allow you to look at the changes in a merge proposal side-by-side, so if you’re finding this cumbersome, click around in your ticketing system to find the comparison tools they might have—I know for sure GitHub offers this feature.
Now that you’ve had a good look at the code, jot down your answers to the following questions:
- Does the code comply with your project’s identified coding standards?
- Does the code limit itself to the scope identified in the ticket?
- Does the code follow industry best practices in the most efficient way possible?
- Has the code been implemented in the best possible way according to all of your internal specifications? It’s important to separate your preferences and stylistic differences from actual problems with the code.
Apply the proposed changes
Now is the time to start up your testing environment and view the proposed change in context. How does it look? Does your solution match what the coder thinks they’ve built? If it doesn’t look right, do you need to clear the cache, or perhaps rebuild the Sass output to update the CSS for the project?
Now is the time to also test the code against whatever test suite you use.
- Does the code introduce any regressions?
- Does the new code perform as well as the old code? Does it still fall within your project’s performance budget for download and page rendering times?
- Are the words all spelled correctly, and do they follow any brand-specific guidelines you have?
Depending on the context for this particular code change, there may be other obvious questions you need to address as part of your code review.
Do your best to create the most comprehensive list of everything you can find wrong (and right) with the code. It’s annoying to get dribbles of feedback from someone as part of the review process, so we’ll try to avoid “just one more thing” wherever we can.
Prepare your feedback
Let’s assume you’ve now got a big juicy list of feedback. Maybe you have no feedback, but I doubt it. If you’ve made it this far in the article, it’s because you love to comb through code as much as I do. Let your freak flag fly and let’s get your review structured in a usable manner for your teammates.
For all the notes you’ve assembled to date, sort them into the following categories:
- The code is broken. It doesn’t compile, introduces a regression, it doesn’t pass the testing suite, or in some way actually fails demonstrably. These are problems which absolutely must be fixed.
- The code does not follow best practices. You have some conventions, the web industry has some guidelines. These fixes are pretty important to make, but they may have some nuances which the developer might not be aware of.
- The code isn’t how you would have written it. You’re a developer with battle-tested opinions, and you know you’re right, you just haven’t had the chance to update the Wikipedia page yet to prove it.
Submit your evaluation
Based on this new categorization, you are ready to engage in passive-aggressive coding. If the problem is clearly a typo and falls into one of the first two categories, go ahead and fix it. Obvious typos don’t really need to go back to the original author, do they? Sure, your teammate will be a little embarrassed, but they’ll appreciate you having saved them a bit of time, and you’ll increase the efficiency of the team by reducing the number of round trips the code needs to take between the developer and the reviewer.
If the change you are itching to make falls into the third category: stop. Do not touch the code. Instead, go back to your colleague and get them to describe their approach. Asking “why” might lead to a really interesting conversation about the merits of the approach taken. It may also reveal limitations of the approach to the original developer. By starting the conversation, you open yourself to the possibility that just maybe your way of doing things isn’t the only viable solution.
If you needed to make any changes to the code, they should be absolutely tiny and minor. You should not be making substantive edits in a peer review process. Make the tiny edits, and then add the changes to your local repository as follows:
git add .
git commit -m "[#61524] Correcting <list problem> identified in peer review."
You can keep the message brief, as your changes should be minor. At this point you should push the reviewed code back up to the server for the original developer to double-check and review. Assuming you’ve set up the branch as a tracking branch, it should just be a matter of running the command as follows:
git push
Update the issue in your ticketing system as is appropriate for your review. Perhaps the code needs more work, or perhaps it was good as written and it is now time to close the issue.
Repeat the steps in this section until the proposed change is complete, and ready to be merged into the main branch.
Merge the approved change into the trunk
Up to this point you’ve been comparing a ticket branch to the master branch in the repository. This main branch is referred to as the “trunk” of your project. (It’s a tree thing, not an elephant thing.) The final step in the review process will be to merge the ticket branch into the trunk, and clean up the corresponding ticket branches.
Begin by updating your master branch to ensure you can publish your changes after the merge.
git checkout master
git pull origin master
Take a deep breath, and merge your ticket branch back into the main repository. As written, the following command will not create a new commit in your repository history. The commits will simply shuffle into line on the master branch, making git log --graph appear as though a separate branch has never existed. If you would like to maintain the illusion of a past branch, simply add the parameter --no-ff to the merge command, which will make it clear, via the graph history and a new commit message, that you have merged a branch at this point. Check with your team to see what’s preferred.
git merge 61524-broken-link
The merge will either fail, or it will succeed. If there are no merge errors, you are ready to share the revised master branch by uploading it to the central repository.
If there are merge errors, the original coders are often better equipped to figure out how to fix them, so you may need to ask them to resolve the conflicts for you.
Once the new commits have been successfully integrated into the master branch, you can delete the old copies of the ticket branches both from your local repository and on the central repository. It’s just basic housekeeping at this point.
git branch -d 61524-broken-link
git push origin --delete 61524-broken-link
This is the process that has worked for the teams I’ve been a part of. Without a peer review process, it can be difficult to address problems in a codebase without blame. With it, the code becomes much more collaborative; when a mistake gets in, it’s because we both missed it. And when a mistake is found before it’s committed, we both breathe a sigh of relief that it was found when it was.
Regardless of whether you’re using Git or another source control system, the peer review process can help your team. Peer-reviewed code might take more time to develop, but it contains fewer mistakes, and has a strong, more diverse team supporting it. And, yes, I’ve been known to learn the habits of my reviewers and choose the most appropriate review style for my work, just like I did as a kid.
- Rachel Andrew on the Business of Web Dev: Getting to the Action
Freelancers and self-employed business owners can choose from a huge number of conferences to attend in any given year. There are hundreds of industry podcasts, a constant stream of published books, and a never-ending supply of sites all giving advice. It is very easy to spend a lot of valuable time and money just attending, watching, reading, listening and hoping that somehow all of this good advice will take root and make our business a success.
However, all the good advice in the world won’t help you if you don’t act on it. While you might leave that expensive conference feeling great, did your attendance create a lasting change to your business? I was thinking about this subject while listening to episode 14 of the Working Out podcast, hosted by Ashley Baxter and Paddy Donnelly. They were talking about following through, and how it is possible to “nod along” to good advice but never do anything with it.
If you have ever been sent to a conference by an employer, you may have been expected to report back. You might even have been asked to present to your team on the takeaway points from the event. As freelancers and business owners, we don’t have anyone making us consolidate our thoughts in that way. It turns out that the way I work gives me a fairly good method of knowing which things are bringing me value.
Tracking actionable advice
I’m a fan of the Getting Things Done technique, and live by my to-do lists. I maintain a Someday/Maybe list in OmniFocus into which I add items that I want to do or at least investigate, but that aren’t a project yet.
If a podcast is worth keeping on my playlist, there will be items entered linking back to certain episodes. Conference takeaways might be a link to a site with information that I want to read. It might be an idea for an article to write, or instructions on something very practical such as setting up an analytics dashboard to better understand some data. The first indicator of a valuable conference is how many items I add during or just after the event.
Having a big list of things to do is all well and good, but it’s only one half of the story. The real value comes when I do the things on that list, and can see whether they were useful to my business. Once again, my GTD lists can be mined for that information.
When tickets go on sale for that conference again, do I have most of those to-do items still sat in Someday/Maybe? Is that because, while they sounded like good ideas, they weren’t all that relevant? Or, have I written a number of blog posts or had several articles published on themes that I started considering off the back of that conference? Did I create that dashboard, and find it useful every day? Did that speaker I was introduced to go on to become a friend or mentor, or someone I’ve exchanged emails with to clarify a topic I’ve been thinking about?
By looking back over my lists and completed items, I can start to make decisions about the real value to my business and life of the things I attend, read, and listen to. I’m able to justify the ticket price, time, and travel costs by making that assessment. I can feel confident that I’m not spending time and money just to feel as if I’m moving forward, yet gaining nothing tangible to show for it.
A final thought on value
As entrepreneurs, we have to make sure we are spending our time and money on things that will give us the best return. All that said, it is important to make time in our schedules for those things that we just enjoy, and in particular those things that do motivate and inspire us. I don’t think that every book you read or event you attend needs to result in a to-do list of actionable items.
What we need as business owners, and as people, is balance. We need to be able to see that the things we are doing are moving our businesses forward, while also making time to be inspired and refreshed to get that actionable work done.
- Have any favorite hacks for getting maximum value from conferences, workshops, and books? Tell us in the comments!
- 10 Years Ago in ALA: Pocket Sized Design
The web doesn’t do “age” especially well. Any blog post or design article more than a few years old gets a raised eyebrow—heck, most people I meet haven’t read John Allsopp’s “A Dao of Web Design” or Jeffrey Zeldman’s “To Hell With Bad Browsers,” both as relevant to the web today as when they were first written. Meanwhile, I’ve got books on my shelves older than I am; most of my favorite films came out before I was born; and my iTunes library is riddled with music that’s decades, if not centuries, old.
(No, I don’t get invited to many parties. Why do you ask oh I get it)
So! It’s probably easy to look at “Pocket-Sized Design,” a lovely article by Jorunn Newth and Elika Etemad that just turned 10 years old, and immediately notice where it’s beginning to show its age. Written at a time when few sites were standards-compliant, and fewer still were mobile-friendly, Newth and Etemad were urging us to think about life beyond the desktop. And when I first re-read it, it’s easy to chuckle at the points that feel like they’re from another age: there’s plenty of talk of screens that are “only 120 pixels wide”; of inputs driven by stylus, rather than touch; and of using the now-basically-defunct handheld media type for your CSS. Seems a bit quaint, right?
Looking past a few of the details, it’s remarkable how well the article’s aged. Modern users may (or may not) manually “turn off in-line image loading,” but they may choose to use a mobile browser that dramatically compresses your images. We may scoff at the idea of someone browsing with a stylus, but handheld video game consoles are impossibly popular when it comes to browsing the web. And while there’s plenty of excitement in our industry for the latest versions of iOS and Android, running on the latest hardware, most of the web’s growth is happening on cheaper hardware, over slower networks (PDF), and via slim data plans—so yes, 10 years on, it’s still true that “downloading to the device is likely to be [expensive], the processors are slow, and the memory is limited.”
In the face of all of that, what I love about Newth and Etemad’s article is just how sensible their solutions are. Rather than suggesting slimmed-down mobile sites, or investing in some device detection library, they take a decidedly standards-focused approach:
In other words, by thinking about the needs of the small screen first, you can layer on more complexity from there. And if you’re hearing shades of mobile first and progressive enhancement here, you’d be right: they’re treating their markup—their content—as a foundation, and gently layering styles atop it to make it accessible to more devices, more places than ever before.
So, no: we aren’t using display: none for our small screen-friendly styles—but I don’t think that’s really the point of Newth and Etemad’s essay. Instead, they’re putting forward a process, a framework for designing beyond the desktop. What they’re arguing for is a truly device-agnostic approach to designing for the web, one that’s as relevant today as it was a decade ago.
Plus ça change, plus c’est la même chose.
- The bitterer the betterer.
As a taster, it's important to know that compared with sour or salty, bitterness is slow to affect our palates. The first two are very simple chemical phenomena and require only the simplest of cellular mechanisms to fire off their signals to the brain. Bitterness, like sweetness and umami, requires an intermediate molecule, something called a G-coupled protein. It takes a little longer to do its thing, and this time dimension of tasting is something that you always need to pay attention to.
- An enthusiastic public reading journal...
- Super Intelligent Humans are Coming
- The story of the Mamas and the Papas, as "an epic tone-poem"
Mama Cass opened a live performance of Creeque Alley with the following: "Everywhere we go, people ask us how we got together. We got tired of answering that question, because everybody does ask us*.... John has written an epic tone-poem of historical nature describing our very get-together, and so we'd like to sing it for you now. Cue the tape." If it's a bit too fast for you to catch all the references, Creeque Alley (dot com) spells it all out line by line, thanks to "painstaking research, some guesswork and a lot of help from many people," including Richard Campbell and his official Cass Elliot website.
* This is an odd VJ remix of the performance, but it includes the introduction without any modifications. The second clip is the unmodified performance, without the intro.
- And if the guest wants to stay at the house, the house is there...
On 27 June 2014, Puʻu ʻŌʻō Crater of the Kīlauea Volcano in Hawaii started a new lava flow, beheading a previous flow. The flow headed northeast through the Puna district towards Pāhoa, passing right by the Kahoe Homesteads subdivision. At the moment the flow is stalled short of Apa`a Street in Pāhoa, but it could resume and ultimately cut the town in half. What to do?
Diverting lava has been tried in Hawaii several times in the past, but not with much success.
Last month, several public meetings were held in Pāhoa to discuss the flow. Hawaii County is not planning any diversion of the lava, and during the meetings civil defense administrator Darryl Oliveira explained why:
At this point, we're not exploring or pursuing doing a diversion, because of uncertainty as to whether it would work or actually make the problem worse. It could divert a flow into another subdivision; spare one and sacrifice or compromise another. And as I've said before, [we're] very sensitive to the cultural aspect of what the volcano represents in our communities.
During the September 5th meeting, several Native Hawaiians stepped up to the microphone to explain their view of Pele and their opposition to diverting the lava.
One of them was Piilani Kaawaloa, whose family home has been spared by lava three times in the past 30 years:
It's like me telling you move the Moon because it's too bright... What we need to do is work together... not fight and tell whose house the lava should cover.
Tim Sullivan, another resident of Pāhoa, has also blogged about the flow and comments on the cultural issues.
For those watching from afar, the Hawaiian Volcano Observatory is tracking the flow and provides status updates, maps and photos.
Previously, previously, and previously.
- "A master gambler and his high-stakes museum."
Walsh agreed to pay Boltanski for the right to film his studio, outside Paris, twenty-four hours a day, and to transmit the images live to Walsh, in Tasmania. But the payment was turned into a macabre bet: the agreed fee was to be divided by eight years, and Boltanski was to be paid a monthly stipend, calculated as a proportion of that period, until his death. Should Boltanski, who was sixty-five years old, live longer than eight years, Walsh will end up paying more than the work is worth, and will have lost the bet. But if Boltanski dies within eight years the gambler will have purchased the work at less than its agreed-upon value, and won. "He has assured me that I will die before the eight years is up, because he never loses. He's probably right," Boltanski told Agence France-Presse in 2009. "I don't look after myself very well. But I'm going to try to survive." He added, "Anyone who never loses or thinks he never loses must be the Devil."—Tasmanian Devil is the story of David Walsh and his Museum of Old and New Art in Hobart, Tasmania, as told by recent Man Booker winner Richard Flanagan.
- The author admits that he ought to know better
Nonsense Novels by Stephen Leacock. Hat tip to Kate Beaton's tumblr, where Nonsense Novels is also available as a pdf download from the NYRB, with an introduction by Daniel Handler.
Containing such thrilling tales as
Maddened by Mystery: or, The Defective Detective
"Sir," said the young man in intense excitement, "a mystery has been committed!"
"Ha!" said the Great Detective, his eye kindling, "is it such as to completely baffle the police of the entire continent?"
"They are so completely baffled with it," said the secretary, "that they are lying collapsed in heaps; many of them have committed suicide."
"Q." A Psychic Pstory of the Psupernatural
"What I mean is," said Annerly, "do you believe in phantasms of the dead?"
"Phantasms?" I repeated.
"Yes, phantasms, or if you prefer the word, phanograms, or say if you will phanogrammatical manifestations, or more simply psychophantasmal phenomena?"
Guido the Gimlet of Ghent: A Romance of Chivalry
IT was in the flood-tide of chivalry. Knighthood was in the pod.
The sun was slowly setting in the east, rising and falling occasionally as it subsided, and illuminating with its dying beams the towers of the grim castle of Buggensberg.
Gertrude the Governess: or, Simple Seventeen
When Gertrude was seventeen her aunt had died of hydrophobia.
The circumstances were mysterious. There had called upon her that day a strange bearded man in the costume of the Russians. After he had left, Gertrude had found her aunt in a syncope from which she passed into an apostrophe and never recovered.
To avoid scandal it was called hydrophobia. Gertrude was thus thrown upon the world. What to do? That was the problem that confronted her.
A Hero in Homespun: or, The Life Struggle of Hezekiah Hayloft
Next he applied for a job as a telegrapher. His mere ignorance of telegraphy was made the ground of refusal.
Sorrows of a Super Soul: or, The Memoirs of Marie Mushenough (Translated, by Machinery, out of the Original Russian.)
As he passed I leaned from the window and threw a rosebud at him.
But he did not see it.
Then I threw a cake of soap and a toothbrush at him. But I missed him, and he passed on.
Hannah of the Highlands: or, The Laird of Loch Aucherlocherty
At least once in every generation a McWhinus or a McShamus had been shot, and always at the turn of the Glen road where it rose to the edge of the cliff. Finally, two generations gone, the McWhinuses had been raised to sudden wealth by the discovery of a coal mine on their land. To show their contempt for the McShamuses they had left the Glen to live in America. The McShamuses, to show their contempt for the McWhinuses, had remained in the Glen. The feud was kept alive in their memory.
Soaked in Seaweed: or, Upset in the Ocean (An Old-fashioned Sea Story.)
Captain Bilge, with a megaphone to his lips, kept calling out to the men in his rough sailor fashion:
"Now, then, don't over-exert yourselves, gentlemen. Remember, please, that we have plenty of time. Keep out of the sun as much as you can. Step carefully in the rigging there, Jones; I fear it's just a little high for you. Tut, tut, Williams, don't get yourself so dirty with that tar, you won't look fit to be seen."
Caroline's Christmas: or, The Inexplicable Infant
John Enderby showed all the passion of an uncontrolled nature. At times he would reach out for the crock of buttermilk that stood beside him and drained a draught of the maddening liquid, till his brain glowed like the coals of the tamarack fire before him.
"John," pleaded Anna, "leave alone the buttermilk. It only maddens you. No good ever came of that."
"Aye, lass," said the farmer, with a bitter laugh, as he buried his head again in the crock, "what care I if it maddens me."
The Man in Asbestos: An Allegory of the Future
It seemed unfair that other writers should be able at will to drop into a sleep of four or five hundred years, and to plunge head-first into a distant future and be a witness of its marvels.
- "...to stay conscious and alive, day in and day out."
Endnotes: David Foster Wallace, BBC Documentary.
- A collection of various interviews, videos, & recordings from and about David Foster Wallace.
- David Foster Wallace uncut interview (11/2003) German television station, ZDF.
- Speech: 'This is Water' given by David Foster Wallace to Kenyon College's 2005 graduating class.
- David Foster Wallace's interview with Leonard Lopate on WNYC.
- Charlie Rose interviews David Foster Wallace Part 1, Part 2, Part 3, Part 4.
- Last Bookworm interview with David, discussing the superlative essay collection "Consider the Lobster."
- David Foster Wallace on Gen X, "Infinite Jest" and a life of writing (1996)
- Amy Wallace speaks about her brother David Foster Wallace.
- Rereading David Foster Wallace - The New Yorker Festival - The New Yorker
- Zadie Smith's essay "Brief Interviews with Hideous Men: The Difficult Gifts of David Foster Wallace," from her 2009 collection Changing My Mind: Occasional Essays. Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7, Part 8, Part 9, Part 10, Part 11, Part 12.
- carne vera sacra
Venerated Members - Europe's History of Penis Worship
Many relics circulated through Europe, especially as a result of the Crusades. The True Cross, the Spear of Longinus, the Shroud of Turin. But possibly strangest among these was The Holy Prepuce, or The Holy Foreskin, as Jesus was certainly circumcised. It then became the object of theological debate, reliquary cults and Italy's oddest town.
Of course, there are hints and shadows of prior penis worship in Europe, and around the world [PDF]. Perhaps the mural of Massa Marittima is not just 'obscene.'
For more, older, reading, try A Discourse on the Worship of Priapus (Sacred Texts, Archive.org), or The Worship of Generative Powers (Sacred Texts, Google Books), combined as Sexual Symbolism: A History of Phallic Worship. Or visit the museum.
previously on MetaFilter
- Family Planning: The short, long and speculative issues
Some interesting recent links on family planning in the short, long and speculative senses.
- Catherine Rampell examines the "information gap" surrounding birth control and family planning amongst young people with lower levels of education.
- Sarah Perry examines the history of fertility transitions over the last 300 years.
- Carl Shulman and Nick Bostrom examine the potential effects of human genetic selection in the next 50 years.
Catherine Rampell: Twenty-somethings with college degrees report using birth control much more consistently than people with no more than a high school diploma. People with less educational attainment were also much more likely to say they know little or nothing about condoms and the pill. And the amount of mistrust, misinformation and old-wives'-taling about birth control was astounding among the less educated, though not wholly absent among college grads.
Sarah Perry: European cultures have historically prevented people from restricting family size within marriage... A new pattern, allowing for controlled fertility within marriage, simultaneously originated in New England and France in the late eighteenth century. The new pattern traveled with a new set of values, including suffrage, democracy, equality, women's rights, and social mobility. Its main mechanism of spread was education, the availability of which also incentivized the new fertility pattern's adoption by providing a clear way for parents to compete for the future status of their children by having fewer children... The benefits of the new pattern are increased material wealth per person, a reduction in disease, starvation, and genocide, and upward social mobility. The main drawback is the onset of a dysgenic phase.
Carl Shulman and Nick Bostrom: We analyze the feasibility, timescale, and possible societal impacts of embryo selection for cognitive enhancement. We find that embryo selection, on its own, may have significant (but likely not drastic) impacts over the next 50 years, though large effects could accumulate over multiple generations. However, there is a complementary technology – stem cell-derived gametes – which has been making rapid progress and which could amplify the impact of embryo selection, enabling very large changes if successfully applied to humans.
- "Look! Sister Mary Lydia, look. There's a fireball out there."
"Only the pen of a Dante could do justice to the sights and sounds that occurred in the St. Clair-Norwood neighborhood that hellish afternoon." Tomorrow marks the 70th anniversary of the East Ohio Gas Explosion, "when fire rained down and streets literally collapsed." Three above-ground tanks holding liquefied natural gas leaked, caught fire, and exploded, leveling one square mile in Cleveland and killing 130 people. It also served as the backdrop to local author Don Robertson's beloved novel The Greatest Thing Since Sliced Bread, which follows the adventures of a nine-year-old boy on that day.
- Douchebag: The White Racial Slur We've All Been Waiting For
I am a white, middle class male professor at a big, public university, and every year I get up in front of a hundred and fifty to two hundred undergraduates in a class on the history of race in America and I ask them to shout white racial slurs at me. The results are usually disappointing.
Now I gotta get to another class half-way across campus, so I don't have time to tell them that so-called "liberal guilt" is not the answer and that empathy and solidarity are. I don't have time to explain that learning to share anger at injustice is the start of a common conversation, and that they can learn how to recognize where privilege resides in their own lives by reading about and listening to the experiences of others who do not have it. But I gotta run, so I just say to them: "It's a long argument, and an endless series of principled choices, but the short version is simply: don't be a douchebag."
- Alzheimer's Insiders
How a doctor, a trader, and the billionaire Steven A. Cohen got entangled in a vast financial scandal.
As Dr. Sid Gilman approached the stage, the hotel ballroom quieted with anticipation. It was July 29, 2008, and a thousand people had gathered in Chicago for the International Conference on Alzheimer's Disease. For decades, scientists had tried, and failed, to devise a cure for Alzheimer's. But in recent years two pharmaceutical companies, Elan and Wyeth, had worked together on an experimental drug called bapineuzumab, which had shown promise in halting the cognitive decay caused by the disease. Tests on mice had proved successful, and in an initial clinical trial a small number of human patients appeared to improve... There would be huge demand for a drug that diminishes the effects of Alzheimer's. As Elan and Wyeth spent hundreds of millions of dollars concocting and testing bapineuzumab, and issued hints about the possibility of a medical breakthrough, investors wondered whether bapi, as it became known, might be "the next Lipitor." Several months before the Chicago conference, Barron's published a cover story speculating that bapi could become "the biggest drug of all time."
"The consensus Wall Street estimate for bapi sales had been about $1 billion annually if the trials had succeeded."
After the bapi clinical trials, "Pfizer and J&J ... revealed that bapi failed to beat placebo in helping patients with mild or moderate disease with their memories in two late-stage studies."
The Corruption of Sid Gilman
The Doctor At The Center Of The Insider Trading Scandal
Dossiers from Bloomberg Businessweek on SAC Capital and SAC's successor organization, Point72 Asset Management
- "Tell me if you hear the fence rattling..."
Life Academy of Health and Bioscience is a small public high school in Oakland, California. In 2011 a small group of student poets evolved, calling themselves "Rapid Fire". At a recent "spoken word" event, senior Monica Mendoza performed her poem "Faggot". With steady determination backed up by thoughtful research, Mendoza explained why people should never use the word. Her crescendo invoked the names of young gay men who lost their lives because of their sexuality. "Every time you use the word faggot...tell me if you hear Bobby Griffith's prayers begging for God to forgive him for being gay/ tell me if you heard the truck smash him to death.../ tell me if you hear the fence rattling after Matthew Shepard was tied and tortured." (The original article in the Southern Poverty Law Center's "Teaching Tolerance" project site)
- Nyeah nargh eeah fwa fwa
Cursors is a fascinating maze game where you have to cooperate with others despite very limited means of communication.
- On Sewing as a Universal Language
Cousu Main (which starts here) is an adaptation of The Great British Sewing Bee, and the blog of one of the participants features significant spoilers for this season. Although it's in French, the show is not hard for an English speaker to follow, just as Project Runway Vietnam (2013: 1 2 3 4 5 6 7 8), Project Runway Korea (2009: 1 2 3 4 5 6 ...), and Projeto Fashion from Brazil--among others--make some sense to those familiar with the English-language series Project Runway Australia, Project Runway Canada, Project Runway Malaysia (2007 finale: 1-5 and 6), Project Runway Philippines (2008: 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15), and Mission Catwalk from Jamaica.