I was one of the first visual artists I know to get an art portfolio website. A student at PNCA did it for me as a project back in 1998. And he did a nice job, for what was available in web technologies in 1998. Then I learned how to make websites myself, and took charge of remaking my portfolio site. The second iteration used Frames (!) and what I thought at the time was an innovative horizontal scrolling navigation. Silly me. (This was before jQuery and all that good stuff, of course.) Then I did iteration #3 in Flash. Which was nice for a few years, but I won’t waste space on why Flash is no longer a good choice for making websites. Besides, the background was black (remember when that was sorta cool on a website?) and the images were small (remember when you had to take dialup users into account?). After a while I stopped mentioning my art portfolio site because it was getting a little embarrassing.
But this winter has seen the great makeover, and now I’m proud to announce the launch of iteration #4 of my art portfolio website, built in HTML5, and all clean and nice looking.
“…the emotional pain (schmerz comes from a German word meaning “pain”) caused by difficult interactions with electronic gadgets or unhelpful websites. If you’ve ever felt your cellphone was out to get you, you’ve suffered from technoschmerz.”
This word is long overdue. (Thank you, Kate Greene, for the coinage.) I expect to use it often. If I take my personal experiences with digital technologies (and in running a web design company, I have plenty) and multiply them by the number of people in the world who depend on contemporary digital gadgets, I can only imagine that the confusion, delays, errors, and resulting stress from wrestling with technologies that keep changing, or don’t work intuitively or correctly, must be global and massive. Has anyone analyzed the overall cost of this? I wonder what it would amount to when weighed against the overall benefits…
Bruce Sterling's chart of technological adaptation
Bruce Sterling, in his book Shaping Things (one of my all-time favorite books, btw), examines the evolving interplay between objects and people. He divides the technosocial realm into several epochs, beginning with ARTIFACTS: hand-made, muscle-powered objects, such as spears. Next come MACHINES, artifacts with moving parts that rely on a non-human, non-animal power source, and require an infrastructure of engineering, distribution, and finance. Think steam engines. Next up are PRODUCTS, which are mass-produced, non-artisanal, widely distributed, and operate over continental economies of scale. Think blenders. Since 1989 we have been in the age of GIZMOS, according to Sterling. Gizmos are
“…highly unstable, user-alterable multi-featured objects, commonly programmable, with a brief lifespan. Gizmos offer functionality so plentiful that it is cheaper to import features into the object than it is to simplify it. Gizmos are commonly linked to network service providers; they are not stand-alone objects but interfaces.
“Unlike artifacts, machines and products, gizmos have enough functionality to actively nag people. Their deployment demands extensive, sustained interaction: upgrades, grooming, plug-ins, plug-outs, unsought messages, security threats…”
Sterling goes on to argue that we are moving into the epoch of SPIMES, which are already among us in primitive forms such as the RFID tag. But that’s a topic for another post. For now, GIZMOS are enough to deal with. And according to Sterling, we have long passed the Line of No Return on them. This is the moment when a revolutionary technology becomes the status quo, and a culture has become so reliant that it cannot voluntarily return to the previous technosocial condition, at least not without social collapse.
And dependent we are. Not just on the objects, but the networks that connect them. IMAP email that shows up at home, work, on my iPad, on my iPhone. Dropbox files that do the same. Writeroom for synched notes, BaseCamp for synched project management, FreshBooks for synched book-keeping. Compared with how I managed files and communications a mere two or three years ago, a revolution has taken place in my personal life, and I know it’s been mirrored in the lives of many.
Infographic by Randy Krum, coolinfographics.com
We are firmly in the age of the GIZMO. Thus I pledge allegiance to the new overlords, and I interact, upgrade, groom, and protect them from security threats whenever they demand it. Because if I fail to nurture these overlords, I become invisible and mute to anyone not standing directly in front of me!
Data-visualization virtuosos Fernanda Viegas and Martin Wattenberg create a hybrid “artform” (for lack of a more inclusive term) out of data sets. Straddling the realms of science, design, art, and exploration, these graphics reveal interesting patterns in data.
“Data visualization has historically been accessible only to the elite in academia, business, and government. But in recent years web-based visualizations–ranging from political art projects to news stories–have reached audiences of millions. Unfortunately, while lay users can view many sophisticated visualizations, they have few ways to create them.
To “democratize” visualization, and experiment with new collaborative techniques, we built Many Eyes, a web site where people may upload their own data, create interactive visualizations, and carry on conversations. The goal is to foster a social style of data analysis in which visualizations serve not only as a discovery tool for individuals but also as a means to spur discussion and collaboration.”
Carbon footprint of a Big Mac, by Tim Fiddaman
Visualizing data that isn’t normally visualized, or presenting it in a new way, tells us different stories about the world. From a kid counting all the socks in his household, to trends in editing Wikipedia, to a “social network” of the characters in the Bible, Many Eyes shows us patterns that hadn’t been noticed before.
Wattenberg and Viegas now work with Google on a project called the Big Picture Visualization Group in Cambridge, MA, with the goal of making visualizations available to regular people via Google.
All websites require planning—that’s so true it’s almost a tautology. But some websites require more planning than others. Blue Mouse Monkey is enjoying an influx of opportunities to overhaul large complex websites, and I’ve been in super-planning mode the last couple of weeks.
As Steve Jobs says, design is often mistakenly ascribed to how something looks, but it’s really about how it works. It’s my job as a web designer to integrate the “how it looks” and the “how it works” according to many factors. There are several useful terms to describe this type of thinking, such as information architecture, interaction design, user experience design, and website architecture.
Historically the term “information architect” is attributed to Richard Saul Wurman, who saw it as the “creating of systemic, structural, and orderly principles to make something work”.
INFORMATION ARCHITECTURE is the categorization of information into a coherent structure, preferably one that the most people can understand quickly, if not inherently.
Understanding how a typical user will experience a decision a website asks them to make (e.g. click on link ‘X’ to access information ‘Y’) takes empathy. It’s the ability to put oneself in the user’s shoes — the user being someone who isn’t nearly as familiar with the website’s content or purpose as my client or I are.
USER EXPERIENCE DESIGN most frequently defines a sequence of interactions between a user (an individual person) and a system, virtual or physical, designed primarily to meet or support user needs and goals while also satisfying system requirements and organizational objectives.
Typical outputs include:
Site Audit (usability study of existing assets)
Flows and Navigation Maps
User Stories or Scenarios
Personas (fictitious users to act out the scenarios)
Site Maps and Content Inventory
Wireframes (screen blueprints or storyboards)
Prototypes (for interactive or in-the-mind simulation)
Written Specifications (describing the behavior or design)
Graphic Mockups (precise visual of the expected end result)
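Of all those outputs, the site map is the easiest to show in miniature: it’s really just a tree of pages, and a content inventory is that same tree flattened into a checklist. Here’s a rough sketch of the idea in Python (the section names are invented for illustration, not taken from any real project):

```python
# A hypothetical site map sketched as a nested dict.
# (Section names are invented for illustration -- not from a real project.)
site_map = {
    "Home": {},
    "About": {"Our Team": {}, "History": {}},
    "Services": {"Web Design": {}, "Branding": {}},
    "Contact": {},
}

def outline(tree, depth=0):
    """Flatten the tree into an indented outline, one page per line."""
    lines = []
    for page, children in tree.items():
        lines.append("  " * depth + page)
        lines.extend(outline(children, depth + 1))
    return lines

print("\n".join(outline(site_map)))
```

The nesting is the information architecture; the flattened outline is the content inventory you hand to whoever is writing or migrating the content.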
When I plan a website I do all these things, except the Persona one, because that’s more applicable to game design. However, we bring in a focus group to give feedback on nearly-completed websites, so in a sense we have real users acting out the experience of the site.
WEBSITE ARCHITECTURE is an approach to the design and planning of websites which, like architecture itself, involves technical, aesthetic and functional criteria. As in traditional architecture, the focus is properly on the user and on user requirements. This requires particular attention to web content, a business plan, usability, interaction design, information architecture and web design. For effective search engine optimization it is necessary to have an appreciation of how a single website relates to the World Wide Web.
Since web content planning, design and management come within the scope of design methods, the traditional Vitruvian aims of commodity, firmness and delight can guide the architecture of websites, as they do physical architecture and other design disciplines. Website architecture is coming within the scope of aesthetics and critical theory, and this trend may accelerate with the advent of the semantic web and Web 2.0, both of which emphasize the structural aspects of information. Structuralism is an approach to knowledge which has influenced a number of academic disciplines, including aesthetics, critical theory and postmodernism. Web 2.0 in particular, because it involves user-generated content, directs the website architect’s attention to the structure of information.
Then there’s the issue of users with different levels of familiarity with the Web. Unlike printed forms of communication such as books, newspapers, magazines and brochures, the Web is not something the majority of the population grew up with. Kids today are “digital natives”, but there are plenty of us still around who are “digital immigrants”.
An analogy is our knowledge of The Book. We all know how to read a book, so much so we barely register it as a type of knowledge. We understand the hierarchy of cover, title, table of contents, parts, chapters, appendices, index. We don’t have to consciously remember where to begin, or in what order to experience the content, because we learned that stuff on our mother’s knee. Well, maybe not appendices and indices, but by the time we’re reading those kinds of books, we have a solid framework to slot those categories into. But the Web? We’ve had to learn that as adults. And it’s so new it’s barely been standardized. No wonder many people find websites (and computers in general) frustrating. Humankind has been tossed into a new way of organizing and accessing information, and our brains, accustomed to one method, have had to adapt to another. Not unlike the medieval monk who has to be taught how to transition from scrolls to a bound book in this comedy sketch.
Not that I’m complaining. Much like how the invention of the printing press led to the spread of liberalism, the Internet communications revolution challenges many traditional structures of knowledge and information by removing gatekeepers to access and expression.
Time for me to get back to planning more website architecture. There’s information to organize!
“You know, before we came up with Noveller, we had all these friends creating these great 75,000- to 300,000-word works of fiction, but there was no quick, easy, fun way to share them,” cofounder Chuck Gregory said. “To be honest, we were stunned there wasn’t already anything like it out there. It seemed so obvious.”
Those who Novel on a daily basis claim to love the challenge of the utility’s 140-page minimum. “I think everyone has at least one Noveller post in them,” said MIT computer networking expert Rod Baines, who noted that he had just posted a sprawling, nuanced, multigenerational family saga while shopping that afternoon. “And half the fun is just following other people’s Novels…”
www.says-it.com is a website where you generate an image of a soda can, poster, church sign, official seal, or any number of other media of ‘official’ expression with your own text. It’s pretty hard to stop making these things.
Check out Wordle, an online toy that lets you generate word clouds from text you input. You can then tweak the colors, fonts, layouts any way you like. This one is made from T.S. Eliot’s poem Ash Wednesday. (Click to view larger.) It’s cool that ‘word’ is the biggest word. And that ‘blue’ is white, and ‘white’ is ecru.
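Under the hood, a word cloud starts with nothing fancier than a word-frequency count: the biggest words are simply the most frequent ones, once common “stop words” are filtered out. A minimal sketch of the idea in Python (this is the general principle, not Wordle’s actual code, and the stop-word list here is just a tiny sample):

```python
import re
from collections import Counter

STOP_WORDS = {"the", "and", "of", "a", "to", "in"}  # a tiny sample list

def word_frequencies(text):
    """Count words, ignoring case, punctuation, and common stop words."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in STOP_WORDS)

freq = word_frequencies("Because I do not hope to turn again / Because I do not hope")
# In a cloud, 'because' (counted twice) would be set in larger type than 'turn'.
```

Everything after the counting step — the fonts, colors, and the clever packing of words into a shape — is where the artistry comes in.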
So what I’m doing now, i.e. creating a blog post, could have been fundamentally different if Ted Nelson had insisted on not letting his model of hypertext get dumbed down during a project he worked on in 1968?
Okay, here’s some context: Ted Nelson is an American inventor, software designer, usability consultant, systems humanist and visiting Fellow at Oxford. He is best known for coining the terms “hypertext” and “hypermedia”, and pursuing a vision of world-wide hypertext from the early 1960s. According to Ted Nelson’s Wikipedia entry, “The main thrust of his work has been to make computers easily accessible to ordinary people. His motto is: A user interface should be so simple that a beginner in an emergency can understand it within ten seconds.” (Wouldn’t that be wonderful?)
According to a page on NewMedia History by Bill Atkinson, Ted Nelson was “one of the most influential figures in computing”, “on a quest to build creative tools that would transform the way we read and write”.
Nelson was particularly concerned with the complex nature of the creative impulse, and he saw the computer as the tool that would make explicit the interdependence of ideas, drawing out connections between literature, art, music and science, since, as he put it, everything is “deeply intertwingled.”
Nelson’s critical breakthrough was to call for a system of non-sequential writing that would allow the reader to aggregate meaning in snippets, in the order of his or her choosing, rather than according to a pre-established structure fixed by the author.
So nearly 50 years ago Ted Nelson envisioned something a lot like what we know as the World Wide Web. On his own site (which is one of the uglier sites on the Web, but that’s not my point) he says,
In 1960 I had a vision of a world-wide system of electronic publishing, anarchic and populist, where anyone could publish anything and anyone could read it. (So far, sounds like the web.)
But what we’ve ended up with is a disappointment to him:
But my approach is about literary depth– including side-by-side intercomparison, annotation, and a unique copyright proposal. I now call this “deep electronic literature” instead of “hypertext,” since people now think hypertext means the web.
In a letter to the editor of New Scientist, 22 July 2006, Ted Nelson wrote:
I coined, you say, the word hypertext in 1963 “while working on ways to make computers more accessible at Brown University in Providence, Rhode Island” (17 June, p 60). But in 1963 I was a dolphin photographer in Miami, nowhere near Brown.
I had become inflamed with ideas and designs for non-sequential literature and media in 1960, but no one would back them, then or now. Not until the late sixties did I spend months at Brown, with no official position and at considerable personal expense, to help them build a hypertext system.
That project dumbed down hypertext to one-way, embedded, non-overlapping links. Its broken and deficient model of hypertext became by turns the structure of the NoteCards and HyperCard programs, the World Wide Web, and XML.
At the time I thought of that structure as an interim model, forgetting the old slogan “nothing endures like the temporary”. XML is only the latest, most publicised, and in my view most wrongful system that fits this description. It is opaque to the laypersons who deserve deep command of electronic literature and media. It gratuitously imposes hierarchy and sequence wherever it can, and is very poor at representing overlap, parallel cross-connection, and other vital non-hierarchical media structures that some people do not wish to recognise.
I believe humanity went down the wrong path because of that project at Brown. I greatly regret my part in it, and that I did not fight for deeper constructs. These would facilitate an entire form of literature where links do not break as versions change; where documents may be closely compared side by side and closely annotated; showing the origins of every quotation; and with a copyright system for frictionless, non-negotiated quotation of any amount at any time.
This amazes me. All along I’ve been thinking XML is marvelous. But when Ted Nelson says, “I believe humanity went down the wrong path because of that project at Brown. I greatly regret my part in it…” I have to take notice. And that the World Wide Web is based on a “broken and deficient model of hypertext”, and XML is a “wrongful system.” Wow. Our lives have been momentously changed in the last 15 years by an information system of enormous scope and complexity that most ordinary folks like myself never saw coming — and Ted Nelson says we could have had something even better if he’d just stuck to his guns about how a single academic project got built back in the late 60s?
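Nelson’s complaint about “one-way, embedded” links is easier to grasp in miniature. A Web href lives inside the source page, so the target page has no way of knowing who points at it, and the link dies when the target moves. In Nelson’s model, as I understand it, links live outside the documents, so they can be followed in either direction. A toy sketch of the contrast (document names and contents invented for illustration):

```python
# Web-style: the link is embedded in the source; "B" never knows it is linked.
web_page_a = "Some prose with an embedded link: <a href='B'>see B</a>"

# Nelson-style: documents hold only content; links are stored externally
# as pairs, so either end of a connection can look up the other.
documents = {"A": "an original passage", "B": "a commentary on A"}
links = [("A", "B")]

def links_from(doc_id):
    return [dst for src, dst in links if src == doc_id]

def links_to(doc_id):
    # The reverse lookup an embedded href cannot provide.
    return [src for src, dst in links if dst == doc_id]
```

With links stored externally, document B can discover its annotators, and a link needn’t break just because one document is edited — which is exactly the “deep electronic literature” Nelson says we lost.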
"And he overseeing it, riding peacefully about on his horse while he learned the language (that meager and fragile thread, Grandfather said, by which the little surface corners and edges of men's secret and solitary lives may be joined for an instant now and then before sinking back into the darkness where the spirit cried for the first time and was not heard and will cry for the last time and will not be heard then either)..."