Frictionless Evernote and the Inbox Notebook

December 26th, 2012

Most folks who know me have, at one time or another, been subjected to my adulation for a pretty fantastic service called Evernote. For those unfamiliar with Evernote, it’s basically a badass digital scrapbook in “the cloud”. And while there’s loads to be said about what it is and how it can be used, that’s not what this post is about. I’m assuming here that you’re already sold on (or perhaps even merely exploring) how this service can facilitate your information management.

And so I’d like to share a little storage workflow tip I’ve been using to keep the organization of the information I store in it a manageable affair:

In short, I have a notebook which I’ve called .Inbox and set as my default notebook.1 I toss whatever content I’ve decided to keep in Evernote there, and worry about making the notebook and tag decisions “later”.

This default notebook basically functions just like a GTD “Inbox”.

I’ll review the .Inbox notebook periodically to tag everything properly and shuffle it off into the appropriate notebook (or perhaps delete it, if it was meant to be temporary). In my case, this review happens about every couple of weeks, but you may instead find it a better fit to do so monthly or daily, depending on your own preferences and needs.

Granted, choosing a notebook and tags for each note typically takes only 10 – 20 seconds, so you may be wondering: why even bother deferring it?

Sometimes I’ve simply got a reading / discovery momentum that I don’t wish to interrupt. I also capture printed receipts, usually on my way out of restaurants or stores, and snap photos of the labels of particularly good amaros when I’m out with friends; I don’t want to spend time fiddling with notebooks or tags then and there.

Ultimately, using this Inbox lets me save my note without interruption and move on, resulting in more actual notes being taken.

Footnotes

  1. Using a leading `.` makes the notebook appear as the first notebook when Evernote lists them alphabetically.

Cafe Frizzante

April 27th, 2012

It’s high time for my blog’s first recipe. It’s a simple one: fizzy espresso. Before you start, you’ll need:

  • A Soda Stream soda maker and its bottle
  • Room temperature water
  • The makings for a coffee (I recommend a double espresso)

Once you’ve got the necessary supplies on hand, here’s your workflow:

  1. Fill the Soda Stream bottle with room temperature water (to the “fill line”).
  2. Attach the bottle to the Soda Stream soda maker.
  3. Press the “fizzer” button until the machine buzzes, then let go.
  4. Press the button again until it buzzes once more, then let go.
  5. “Burp” the bottle by unscrewing it until it releases its built-up pressure (you’ll hear a satisfying hiss), but do not remove the bottle.
  6. Screw the bottle back onto the soda maker to create your seal again.
  7. Press the “fizzer” button once more, again until it makes a buzzing sound, then release.
  8. Remove the bottle from the soda maker and cap it. Do not refrigerate; leave it at room temperature.
  9. Make your coffee (I recommend a double espresso).
  10. For each 1.5 oz of liquid in your coffee, add a tablespoon of the over-fizzed water you made earlier.1

And that’s about it. Just remember, though, that if you add more hot water (for an Americano) or milk (for a latte, cappuccino, etc), you’ll need to add some more of that fizzy water – one tablespoon per 1.5 ounces of pre-fizzed drink.
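
To make the math concrete with a made-up example: a 6-ounce Americano works out to 6 ÷ 1.5 = 4, so you’d stir in about four tablespoons of the fizzy water in total.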

Enjoy.

Footnotes

  1. A double espresso is a 1.2 to 2 ounce espresso made from 12 to 20 grams of coffee

Responsive Breakpoints and Goldilocks

April 21st, 2012

The practice of defining “breakpoints” (of screen sizes) is a commonly-used mechanism in the Responsive Design world for determining what sort of styling rules, experience enhancements, and even types of assets ought to be delivered to users viewing site content on their particular device.

I’ve typically seen these breakpoints chosen with iOS devices in mind: 320px and up for handhelds, 768px and up for tablets, and then 1280px and up for laptops and desktops.

But there are loads of devices being shipped by a variety of manufacturers, whose phones are larger and whose tablets are smaller. And let’s not overlook the likelihood that the number of devices will only continue to grow, even if all you claim to care about is Apple’s iOS mobile devices (which, IMO, isn’t any smarter than slapping a “Best Viewed in Netscape 4” badge on your website).

So, how better to think of these breakpoints?

One might suggest that a “theoretically pure” approach would be to consider the physical size of the display and to offer up a design and assets that would be experienced well, given that space.

The trouble is that devices have different pixel densities. And before we get all excited about the fact that CSS also supports real-world sizing units, like inches and millimeters, I regret to have to report that these aren’t reliable, either. Even iOS devices, whose screen sizes are extraordinarily well-defined, render real-world measurement units at 3/4 of the desired size (i.e., a 2-inch-wide <div> rendered on an iPad measures 1.5 inches with a ruler).

In case you want to check this claim out for yourself:

[Test boxes rendered at fixed physical sizes: 3″ x 1″ (inches), 8cm x 2cm (centimeters), 80mm x 20mm (millimeters)]
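
If you’d like to recreate those test boxes on a page of your own, here’s a minimal sketch of what I mean (the markup and class names are just illustrative, not from this site):

```html
<!-- Three boxes sized in real-world CSS units; hold a ruler up to the screen to check them. -->
<style>
  .box        { background: #eee; border: 1px solid #999; margin-bottom: 1em; }
  .box-inches { width: 3in;  height: 1in; }
  .box-cm     { width: 8cm;  height: 2cm; }
  .box-mm     { width: 80mm; height: 20mm; }
</style>
<div class="box box-inches">3in x 1in</div>
<div class="box box-cm">8cm x 2cm</div>
<div class="box box-mm">80mm x 20mm</div>
```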

Purity, as you can see, is rarely practical.

Still, the general goal of matching up layout styling rules appropriately for a given display size is a sound one. And so I’m thinking a more pragmatic approach with similar goals is in order.

Devices and Their Imperfect Pixels

For better or worse, the digital design world uses pixels as the de facto measurement unit. Now, anyone with a lick of experience with digital design (whether it’s creation, implementation, or troubleshooting) will immediately realize that there’s loads of pixel size variety between devices. Very roughly, PC screens have an average pixel resolution of 100ppi, while mobile devices have a resolution of about 160ppi.1

This is in no small part thanks to the fact that the original iPhone was a 160ppi device, and to Google’s “device independent pixel” work (also called “dp” or “dip”), which notes:

The density-independent pixel is equivalent to one physical pixel on a 160 dpi screen, which is the baseline density assumed by the system for a “medium” density screen.
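
To put that baseline in concrete terms: a 480-pixel-wide layout occupies about 3 physical inches on a 160ppi screen (480 ÷ 160), but closer to 4.8 inches on a roughly-100ppi desktop monitor, which is why raw pixel counts only loosely track physical size.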

So, given this loose “pixel-standard” sizing practice, I hunted around for some reference that outlined the screen sizes of some common mobile devices, and found this handy (already year-old) matrix from UX Booth, reproduced here for convenience:

320×240
  • Blackberry Devices: Curve 8530, Pearl Flip
  • Android Devices: Motorola Charm, Sony Ericsson Xperia X10 Mini, others
  • Symbian OS Devices: Nokia E63, others
320×480
  • Apple OS Devices: iPhone, iPod
  • Android Devices: HTC Dream, HTC Hero, Droid Pro, i7500 Galaxy, Samsung Moment, others
480×360
  • Blackberry Devices: Torch, Storm, Bold
360×640
  • Symbian OS Devices: Nokia N8, Nokia C6-01, others
480×800
  • Android Devices: Liquid A1, HTC Desire, Nexus One, i9000 Galaxy S, others
  • Maemo (Linux) Devices: Nokia N900, others
  • Windows Mobile 6 Devices: Sharp S01SH
  • Windows Phone 7 Devices: Venue Pro, Samsung Omnia 7, HTC 7 Pro, others
768×1024
  • Apple OS Devices: iPad
640×960
  • Apple OS Devices: iPhone 4
1280×800
  • Android Devices: Motorola Xoom, Samsung Galaxy Tab 10.1
  • Windows OS Devices: Asus Eee Pad EP121
  • Apple OS Devices: Axiotron Modbook

Please note that this is a very limited list, and is by no means complete. What is important to take from this data is that a wide range of screen resolutions is out there, and new devices are introduced constantly.

Starting with this survey of device screen sizes, let’s loop back to the idea of using the layout capabilities afforded by the display area as the central concern in identifying our breakpoints: which display width offers “enough room” for image floats, and which for multiple columns of content?2

Next, let’s keep the number of breakpoints practical, for each one introduces some amount of development and maintenance overhead. We’ll need to balance “enough to be useful” vs. “too many to manage”.

Here are my recommendations:

(unknown)
Presume “handheld” / feature phone. Single column. No floats. Serve images at their smallest dimensions (up to 64px wide, usually best served at around 40px-ish).

300px
Handheld-ish device. Single column. No image floats in primary content. Serve images at a “medium” dimension (300-400 px wide), allowing users to “pinch” to zoom in, etc. Tile ads OK.

720px
Tablet-ish screen size. Honor image floats in primary content. Larger images. Still present primary content in a single column (i.e., don’t hoist the sidebar alongside primary content, though the sidebar section could potentially get two columns of its own). Banner and Tile ads OK.

960px
Desktop-ish screen size. Full-sized images. Hoist sidebar alongside primary content. Any ad size.

1280px
Desktop “plus”. Consider adding supplemental stuff off to the right, like how Facebook automatically adds user chat to the right for wider screens; this might be real-time social activity, for example.
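
As a rough sketch of how these recommendations might translate into mobile-first CSS media queries (the selectors and class names below are placeholders, not from any particular project):

```css
/* Base / (unknown): presume a feature phone. Single column, no floats, smallest images. */
.content img { float: none; max-width: 64px; }

/* 300px and up: handheld-ish. Medium images, still a single column. */
@media screen and (min-width: 300px) {
  .content img { max-width: 400px; }
}

/* 720px and up: tablet-ish. Honor image floats; larger images. */
@media screen and (min-width: 720px) {
  .content img.floated { float: left; margin-right: 1em; }
}

/* 960px and up: desktop-ish. Hoist the sidebar alongside primary content. */
@media screen and (min-width: 960px) {
  .content { float: left; width: 70%; }
  .sidebar { float: left; width: 30%; }
}

/* 1280px and up: desktop "plus". Room for supplemental content off to the right. */
@media screen and (min-width: 1280px) {
  .supplemental { display: block; }
}
```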

Last Thoughts

It’s worth mentioning that the above breakpoints were defined with layout capabilities in mind; they don’t address other facets of potential experience “enhancement”, like whether a sequence of images is best presented as a list or as a horizontal carousel, which is much more a question of device capabilities such as JavaScript and/or support for swipe gestures.

And don’t forget the idea of 2x image assets for high-pixel-density devices (like the new iPad, with its Retina Display™), which starts to open the bandwidth measurement can of worms.

What do you folks think of these breakpoints?

Materials

Here are some related materials that inspired this exploration:

And some tools I had in mind:

Footnotes

  1. There are lots of caveats here, but in the world of web design, there’s enough truth in that to serve as a workable fundamental understanding.
  2. Because we’re exploring web design, let’s identify the primary dimension for our breakpoints consideration to be width, since “height” is often best considered “infinite” on web pages.

New iPad Totally Awesome, but Two LTE Gotchas

March 25th, 2012

Like much of the rest of the folks who bought one, I’m really loving the New iPad (aka “iPad 3”). Yes, it really is that wonderful: instantly responsive and fluid, with a wonderful screen, and even without Siri, its dictation feature is a real boon when I’m writing small bits of text (for me, that’s when I’m marking up PDFs or design comps with feedback for work).

When I got the first one, in 2010, I wasn’t sure what sort of mileage I’d get out of it. While I was excited to get my hands on one, I didn’t really know how much I’d wind up using it after the “honeymoon” was over and it was no longer that shiny new thing. So I decided to exercise some fiscal restraint and pick just one of the available upgrades from the base model: either the storage bump (32G instead of 16G) or the mobile data option. Since I had already been happily using my iPhone heavily for on-the-go data access, I decided the extra storage space would give my buck the biggest bang.

Turns out I used the shit out of that iPad over the two years that followed, but I definitely came to miss, on several occasions, the data connection option I had eschewed.

I skipped the iPad 2, in part for fiscal discipline, but also because I had used the iPhone 4 and knew a Retina Display would be the irresistible iPad upgrade lure. Thank goodness the 2012 iPad delivered on that display, too, because the newer versions of all my apps had already started putting a noticeable strain on the CPU and RAM of my two-year-old iPad.

I put my order in for this iPad the day it was announced.

This time, I knew I’d use this thing heavily for the next few years, so I opted to max out the storage at 64G, and — being sick of AT&T’s shitty signal coverage on my iPhone — I opted for the model that supported Verizon’s LTE mobile data option.

Everything about this new iPad is brilliant, and I’m really enjoying having on-the-go data.

I won’t belabor the points about the smooth performance and the dazzling screen you’ll already have learned from other sources. They are indeed that good.

If you’re wondering whether I recommend you get one, the answer is almost certainly “yes”. If you’re similarly struggling, as I previously had been, between getting additional storage or mobile data, I’m going to say most people would be better served by the mobile data option.

I do have two points of mild warning, however, both regarding the mobile data option:

  1. My high hopes for Verizon’s data coverage have been tempered by actual usage. Neither my iPhone’s AT&T 3G connection nor my iPad’s Verizon LTE coverage gives me a lick of actually-usable mobile data throughput in my Times Square office. Both devices show many bars of “signal strength”, but trying to actually load content over the mobile data network gets me nothing but quality time with “Loading” spinners. While one can simply write off Times Square as a dead zone for data (as well as for the prospect of finding any good food, or much else of particular value, frankly), I’ve also had data timeouts in other places in NYC’s boroughs. Perhaps I’d set my hopes too high, or maybe Verizon just had a surge in the load of new iPad-owner LTE traffic that they’ll adjust to over time. Time will tell.

  2. When data throughput is available on the Verizon LTE connection, it is very (very) fast. When coverage is available, my iPad’s Verizon LTE service is faster, in many cases, than the WiFi connection at my house which has a Verizon DSL Internet connection. So here’s the “warning” part of this point: be very wary of streaming any video content. Most video delivery services are designed to deliver lower-quality video to lower-bandwidth connections, and higher-quality video to higher-bandwidth ones. The trouble here is that the LTE connections are so fast that video streaming services deliver the highest bit-rate renditions of videos to your iPad. This will tear through the allotted data quota of whatever data package you’ve subscribed to far faster than you’d expect.
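
To put rough, purely illustrative numbers on that last point: a high-bitrate stream at around 5 Mbps chews through roughly 2.2 GB per hour (5 ÷ 8 × 3,600 ≈ 2,250 MB), so a single hour-long show could consume most of a 2 GB monthly data package.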

In any case, I’m delighted to be able to pull the latest posts from my RSS subscriptions into Mr. Reader while waiting for my coffee and eggs at brunch, or to review the tasks I’ve got in OmniFocus and ensure I’m working the latest sync of my data.

Greener Than Expected

August 8th, 2011

I just picked up one of the new Mac Minis that Apple released this summer, which added a Thunderbolt port, dropped the optical drive, and shipped with the new Intel i5 and i7 CPUs. Given that this is largely nerd talk, most of that isn’t particularly germane to the story, except for the new processors, which were originally designed for laptop use (they’re also shipping in the new Thunderbolt-equipped MacBook Airs).

This computer replaced a five year-old Mac Pro; the very first tower Mac that Apple shipped with Intel CPUs.

In a nutshell, the Mac Pro had been a trusty computer, and actually still works splendidly. The only trouble is that it takes up a bunch of floor space, guzzles electricity, and – most painfully, during a NYC summer – kicks off a ton of heat. I was looking to lower all three profiles.

As part of my intention to use less electricity, I also picked up a Digital POWERCENTER 650G “GreenPower” surge protector, by Monster. It looks like this:

For anyone unfamiliar with Monster’s “GreenPower” products, they describe the line as follows:

Monster GreenPower™ is a revolutionary new way to automatically reduce energy waste and save you money. Simply plug your computer into the GreenPower Control socket. When it’s turned off or goes to sleep, the other GreenPower sockets switch off, automatically eliminating energy wasted by peripherals, like your monitor and scanner, when you’re not using them. When your computer turns back on, the GreenPower sockets automatically power up again.

The gist is that the surge protector has one “master” socket (labeled above as “computer”), into which you’re meant to plug the “primary device”, and a number of “subordinate” sockets (for the various accessories attached to the computer), which only get juice when the device on the “master” socket is consuming 17 Watts or more of power1. The surge protector also has a single “independent” socket (labeled “modem” in the photo above), into which you can plug a device that isn’t part of the “master / subordinate” equation.

So, following the direction suggested by the labeling, I plugged the Mac Mini into the “master” socket, and plugged the monitor, printer, speakers, and USB hub into the “subordinate” sockets. The “independent” socket remained unused.

Then I turned the computer on. The “subordinate” devices remained off for a few seconds. But once the startup process was in full swing, the monitor came to life, I heard the printer begin to do its “wakeup dance”, and the speakers popped as power flowed to them!

Then the login screen came up, and the monitor et al. lost power.

I figured that I just needed to get past the login and start using the computer, and that this would keep everything juiced up. So I typed my password and hit the ENTER key. Immediately the monitor came back to life, the printer did its initialization dance, and the speakers popped to life again, while the Finder launched, my “startup items” got spawned, and Lion restored my application state from before I had shut the computer down in order to replace the old surge protector.

As I reached for the trackpad, however, the monitor and the rest of the devices plugged into the “subordinate” sockets all shut off again; the Mini simply did not consistently draw enough power to meet the 17 Watt minimum required from the “master” socket in order to activate its subordinates.

Remember when I mentioned that i5 Intel CPU back at the top? Apparently they are particularly energy efficient.

No wonder Apple put ‘em in the Air.

I started to wonder, however, if I’d just bought some new but utterly useless thing, destined to merely collect dust.

So I plugged the monitor into the still-free “independent” socket, and managed to safely shut the computer down again. While it was clear to me that the Mini couldn’t be the device plugged into the “master” socket, it simply wasn’t safe to plug it into the “subordinate” sockets, either. So it had to take the “independent” socket, while some other device was to be used to drive the “master” socket.

At first, I tried plugging the monitor into the “master”. It seemed like a reasonable selection, given that putting the computer to sleep would cut the video signal, hence putting the monitor into standby mode.

Switching it on, I learned the monitor had no trouble driving the master socket. At all.

But now I had two devices (the Mini and the monitor) which would be sipping power 24 hours a day, even in standby mode. My savings were diminishing. I also keep a clamp light next to my desk, which I always turn on when I’m using the computer. I’ve presently got a 60W bulb in it, which uses far more than the 17W minimum required to drive the “subordinate” accessories. And it draws ZERO Watts when switched off.

And so I’d found the winner.

So, in the end, the device powering setup on the surge protector looks like this:

Master: Lamp (60W on; 0W off)
Subordinates: Monitor, Printer, Speakers, USB hub (various power consumption rates)
Independent: Mini (apparently mostly south of 17W when on; ~4W in sleep)

There is also no longer a Mac Pro on the floor, claiming 6″ of space between the wall and my desk; and the corner remains much cooler, letting me run the AC at lower levels.

Footnotes

  1. I couldn’t find this official information on Monster’s website, but here’s a Google search which should offer up some reference material for the curious