October 2023

Design is faith first, then fact

I’ve had 20 years’ practice at Design and almost every time I embark on a new project there’s a little voice inside my head that says, “What if we come out of this $10k research project with nothing? What if my toolkit doesn’t work this time?”

That voice is still there despite 20 years of hard evidence that, every time, the investment in research is money well spent. In fact, every time I finish a project I walk away thinking, “Gee, if I were that client, I’d pay double for that level of insight, de-risking, innovation, etc.” – all the things that practitioners know design research is good for.

And so, if even I still have that little voice in my head, with 20 years of direct experience of Design’s value, how can I expect anything but scepticism from a client or executive who’s never seen what design methods can do?

Design as religion

It strikes me that Design is very much like religion. Design, like religion, requires faith – a belief that, if we follow this set of rituals and practices, we will achieve insight salvation: a miraculous understanding of our user/s that will unlock more social and/or financial benefit for the organisation and those it serves.

And, like religion, there are sceptics – design atheists and non-believers. Sometimes, they believe in Engineering. Sometimes, they believe in Product Management. Sometimes, they believe in Six Sigma management or ‘servant leadership’, which comes in many flavours. They say things like, “I don’t need the fancy colours and clothing of your faith, functional is fine. We’ll just ship faster and learn in real life.” Or, they say, “I already read the sacred texts of your Design religion and we don’t need to hire one of the clergy to perform the sacrament for us. I got this.”

And you know what I’ve learned? Sometimes, they don’t need Design. Sometimes, they have a market or a business that has no competition, or whose existing service landscape is so akin to the holy fires of hell that doing anything slightly better than before is good enough for now. You don’t always need the pope to exorcise a devil. Sometimes a bit of garlic around your neck will do just fine.

For years, the devout Christian members of my family have told me that I don’t know what I’m missing by not cultivating a personal relationship with God. And for years I reply, “I’m doing just fine, thanks.” And, I truly believe I am.

Just like we do in Design, religious institutions try to use testimony to show us what life could be like on the other side – if only we were more faithful. So and so was cured of cancer. God gave them a new baby when they asked for one. God protected them on their overseas adventure. Their prayers were answered. Loaves and fishes. Water and wine. The list goes on.

In Design, we write case studies to show the unconverted what they’re missing out on: “Company X unlocked 10x revenue with this one simple design change.” Or, “Investing in user research gave Company Y a whole new product category.” Or, “User efficiency went up 10m% because we understood their needs.”

Loaves and Fishes. Water and Wine. Designers perform miracles. Jesus does too.

Design as fact

Those of us in the clergy of this Design religion (let’s call ourselves “Professional Designers”) believe we’re offering something special: a version of enlightenment. Those outside the congregation question what the clergy actually does with the pennies they collect from the donation box. The clergy struggle to grow the church, and we spend most of our time online talking about how the outside world could be saved, if only they came to church once in a while, or read this sacred text, or repented their sins.

The thing is, no matter what we think of the non-believers, there are some true believers who aren’t clergy – non-designers who believe in Design. They are few in number, but they are there. I know some! The question is, how did they become that way? What’s their conversion story? Might that be helpful to the Design clergy in evangelising the benefits of Design?

Can I get a witness?

I’ve worked with a lot of non-believers over my time. People who, for one reason or another, thought Design had nothing to offer. But, they read some HBR article at some point and thought, “Maybe I’ll give it a try to see what all the fuss is about.” Or, they were working in a highly regulated space and needed the Design process – speaking with users and a report – for some version of due diligence and risk mitigation. There have also been, on a very few occasions, the ones who needed ‘saving’: those who fell so far from the grace of God by shipping waste-of-money software that, through the Design process, they sought repentance (mostly from their boss/manager).

And you know what? As I’m one of the clergy standing out on the street handing out fliers, I’ll take any non-believer I can get.

What I’ve learned over all these years is that someone needs to ‘experience’ the miracle of Design before they become a devoted follower. You can tell people as many testimonies as you like but, until you feel it – i.e. you go from thinking one thing to thinking another about your audience, or your entire body relaxes as you realise that you’ve de-risked whatever decision you were going to make anyway, or you unlock a market bigger than you ever thought possible – you can’t believe in it. You won’t want to go to Church again if you don’t come out feeling different each time.

Give me a shepherd I want to follow

When I was a kid, I was dragged to church weekly by my Mum. It was a really boring Catholic church that simply ‘ticked the boxes’. Read the book. Ding the bell. Recite the prayer. Put coins in the donation box. Eat the bread. Ding the bell again. Shake the priest’s hand on your way out.

I’ve been in design organisations that take a similar approach to their Design work – tick off the steps of the double-diamond (research, report, invent, test, refine, etc.) and arrive at the end. In both scenarios, no internal transformation occurs. As a non-believer you’re left thinking, “That was a waste of time. I could’ve done something else with my Sunday morning.”

Why are we surprised that people don’t engage with Design when the clergy makes them yawn, or isn’t invested in giving them the transformative experience we know is possible?

One time, I went to my uncle’s Pentecostal service. In contrast to the Catholic service, this was outrageously lively. There were people speaking in tongues, dancing in the aisles, falling over after being ‘touched by the Lord’ and re-born into a new life. Unlike the Catholic gospel reading, in the Pentecostal equivalent the pastor was critically analysing the Bible, putting it into historical context, drawing lessons from the text, and applying them to the lives we live today. As a 9-year-old, I was blown away. I remember thinking, “Woah, if this is Church, maybe I could be into it.”

There are some design teams that do this, too. Teams where the design process is participatory at all levels. The clergy draw on a vast toolkit of design methods and apply them to the problem they’re trying to solve in very targeted ways. They don’t ‘always do focus groups’ or ‘speak to 8+ people’. They don’t always ‘do research’, or write reports, or do ‘divergent thinking followed by convergent thinking’. They are, quite simply, goal-oriented and focussed on giving their client the miracle of Design in whatever format suits the problem they’re trying to solve. It’s those clients that go away changed.

Building a Design Congregation

Building the Design Congregation, as it turns out, works much like evangelical religion. People need people-like-them to have experienced the miracle of Design and ‘spread the word’ that Design Is Good. That it does not judge. That it accepts all who are willing to listen, engage, and approach it with a curious and open mind. That Design is here to save them, and humanity, from the evils of the world, like waste. Planting an equivalent of the Gideons Bible in every corporate office (Design Thinking by those IDEO guys?) ain’t gonna do it. It comes back to designing the design experience for non-believers.

Those who have experienced the power of Design through an internal transformation of their own worldview know that it’s not just faith forevermore, but fact. It’s not mystic creative genius by the anointed few; it’s logic and deduction. Where Design is *unlike* religion is that Design is empirical, evidence-based, collaborative, and iterative. The Designer is not The Pope, but a shepherd. There may need to be some faith to walk into the church for the first time but, once you walk out, you walk out changed. And then you tell others that, maybe next time, they could go – just once – to see what all the fuss is about.

Once I was blind, now I see, or something like that.

May 2023

People not proxies

One of the core tenets of human-centred design is, well, the human-centred bit. What this is supposed to mean is that we – professional designers (and their teams) – create products, services, and interventions that consider the genuine needs and wants of the humans whose lives we’re trying to improve. And then actually improve them.

Bound by the rules of capitalism, designers have, for the most part, been engaged in a decades-long advocacy struggle with those who want more efficient and profitable businesses to, please, consider the people they’re making products and services for.

Over those decades, designers have attempted to develop tools and frameworks to help make their case more concrete, or to present it in a language that the decision makers of businesses (whose primary concern is the shareholder) understand. Driven by the (perhaps fallacious) idea that what’s good for the user is good for the business, designers start at one end of the spectrum – absolute advocacy for the interests of those who use the products and services the business provides. The business owners start at the other – the most efficient way to maximise profit, reduce cost for shareholders, and deliver on what is often a pre-defined strategy.

Most designers know that the best way through any conflict is compromise and so, over time, that’s what we’ve done.

Let’s, for a moment, imagine an HCD Utopia

When I hear my peers describe their ‘teams’, I hear various articulations of ‘the three-legged stool’, i.e. they largely consist of three primary roles – Designer, Product Manager, Engineer/s. This is currently ‘normal’ in technology-focussed teams. These technology-led cultures mostly consider this ‘multi-disciplinary’ and, most recently, the model is ‘maturing’ to include other business roles like Sales, Marketing, Subject Matter Experts, and so on.

But, I don’t know any team or business today that employs the people for whom they’re designing; yeah, that’s right, employing the user. Not just employs them, but makes them equal and integrated members of that team. Even writing that sentence feels extremely radical. And, by employ, I don’t mean a token $60 for filling out a survey. I mean a long-term, project-length commitment to contributing to the design process – to literally be the human/s in the centre of the design process. Designers, Product Managers, and Engineers as peers to those whose lives we are trying to affect. What would that world be like?

What if users were working and being paid alongside us, as equals?

Guided by principles like “No solution for us, without us”, the idea of true HCD is co-design (aka participatory design): absolute user integration into the design process. It’s not that designers and other technology professionals are somehow superior or more powerful in the process. What I’m talking about is true equality. It’s difficult for us to imagine this utopia because it so rarely happens.

In a perfect world, having the communities for whom we are designing be an integral, long-term, and consistent part of the design process is what human-centred design needs and, at the moment, very rarely gets.

What’s the next best thing to co-design?

Employing the humans for whom we are designing is nowhere near the Overton window right now. So, as good designers do, we’ve decided to take a compromising step toward the middle – a lean (and therefore cost-effective) approach to involving users in the design process. After all, employing the people we design for is expensive compared to asking them to contribute on an ad-hoc basis. We, as a profession, are currently satisfied with that ad-hoc basis of user input.

Here’s where we currently are: the three-legged stool (each of whom earns upwards of $150k/year) gives ‘incentives’ to users to participate in the design process in a lightweight way. Research methods like focus groups and surveys, mostly at the beginning of the design process, are the ‘normal’ way. Teams offer things like $50–$100 in cash or vouchers to the people whose information and context are critical to the success of the solution (and therefore the revenue of the business). Even this model, as basic as it is, still has its difficulties. Some businesses have made this approach part of their day-to-day – they have dedicated budgets for incentives and processes like ResearchOps. But, a lot of the time, getting regular and direct access to users in this lightweight way is still seen as ‘expensive’, ‘time-consuming’, and ‘difficult’.

And so we’re currently in a situation where two things are true:

  1. It’s still largely considered ‘expensive’ (both in time and money) to get ad-hoc input from the people and communities in which we’re paid to intervene and, hopefully, improve for shared value.
  2. Driven by ideas like Design Thinking, most organisations have become comfortable with non-iterative, linear processes of “design & release” product development.

By combining these two things, we’re left with a process that masquerades as human-centred design but is so far removed from the principles of good-quality co-design that it becomes, in the most literal sense, unjust and potentially harmful to the very communities we believe we’re trying to help.

And then, there are personas

Having said all of that, I believe that most designers (and product teams), at their core, genuinely want to make things better in the world. And so, in a valiant effort to be more human-centred but still deliver increased profit and reduced cost to businesses, we’ve continued to try to find a better middle ground: how might we ensure that the team making decisions in the high-rise meeting rooms of the company offices (or homes) doesn’t lose sight of the human impact of the decisions they’re making – all without spending more money and time than was approved at the start of the financial year? Enter the persona.

At their core, personas are averages. (I’ve always hated them). They were the design community’s attempt to help non-designers in the team and business empathise with the people whose lives they were attempting to change without going to the cost of paying individuals, regularly, for their input.

We’ve created a habit of engaging the people we’re designing for mostly at the ‘start’ of the design process. We seek to understand their behaviours, needs, motivations, and context through various research methods. The law of diminishing returns says we probably need 6–8 people. At $50 a pop, that’s $300–$400. Business budgets can typically swallow that. It’s the first part of the double-diamond, right? That’s easy to sell to the executive. A required step in a linear process. Approved.

And then, with that very limited set of information, we abstract our findings enough to create averages. We give them kitschy, further-abstracted labels like “The Dreamer” or “The Planner”. We use age ranges or other general characteristics (derived from Marketing) like “30–60-year-old mother of two”. We make nice little posters, present them to the business, and say, “Here, here’s what your $300 got you. Valuable, right? Can we have more budget next time?”

And, in the moment, it feels good. The outlay has been minimal and we feel like we understand the people we’re about to design for. Then, with no additional input, we typically create solutions in our offices, in isolation from those people who gave us the critical information about their lives. We may test them but, increasingly, our product delivery culture has become one of shipping first and ‘testing in real life’ – move fast and break things, right? Well, that works if there’s time to iterate in real life, too, but that’s almost never the case. And, when you can ship to 2 billion people overnight, it’s outright dangerous.

But, it’s better than no user input at all, right?

Ah. Well. No. And here’s why:

  1. By designing for averages, we create average design – solutions that don’t really solve anyone’s problem and, because of this, often create more problems.
  2. Our product design culture is typically not one of build, measure, learn. It’s one of build, build, build. Businesses still think they require certainty – a top-down plan that can be communicated to a board to set expectations – so roadmaps are typically drawn months in advance, with ‘features’ already prescribed and very little flex built in for teams to release something, learn something, and adapt (which is what agile was supposed to be for, by the way). It’s antithetical to the complex adaptive system that is the human/technology relationship.
  3. Persona documents are very rarely (if ever) updated. They look and feel complete and factual. If the designer can abstract the groups enough, they feel as though they cover ‘all of our target market’. New research often happens at a feature level once the initial research is complete, but it’s very rarely captured company-wide and shared across all design teams, so we all end up working off bad, abstract, and old information.
  4. Most worryingly, it’s changing our practitioners’ definition of what good human-centred design looks like – it’s now OK, in fact ‘progressive’, to work in this way. New designers are watching experienced designers work in this way and calibrating their sense of what good HCD looks like. Research upfront, ideate, plan, then build, build, build until the end of the financial year and the budget’s gone. That’s not OK.

Complex Adaptive Systems

The thing with designing tools and services for humans is that the relationship between humans and their environment is a complex adaptive system. First we design the tools, then the tools design us. This means that the lean, linear, lightweight processes that currently characterise ‘progressive’ HCD in most larger organisations intervene in systems in ways that no human, no matter how much planning and research we do, can predict. We learn about the human/technology relationship only through interacting with it. It’s a hallmark of complex adaptive systems. The problem is ecological, not one of engineering, and the way we design and intervene in people’s lives is not compatible with this.

What we need from design advocacy isn’t another presentation on the double-diamond methodology or another version of Design Thinking that further advocates for linear ways of thinking about design; we need to recognise and remind one another that the decisions we make about the ways we intervene in people’s lives have intended and unintended consequences every time, because that’s the nature of complex adaptive systems. We need to remind each other that it isn’t the behaviour of “The Dreamer” that we’re trying to change – it’s quite literally the lives of Bob, Janet, Carlos, and Mohammed. It’s my parents. Your kids. Our species, and the others we share the planet with. It’s either making a more equal and just world, or it’s doing the opposite.

When we’re dealing with complex adaptive systems, there is no solution, just better or worse. We need to get better at asking ourselves not “Will my boss approve my budget for research?” but “Who might this help and, in the same fell swoop, who might this harm?”

Where does Design go from here?

I don’t think there are definitive answers to the problem we face; to think there would be is to ignore the very problem I’m describing – there are no solutions at all, only interventions. But I’m finding that I’m running out of ways to tell some people why they should care about other people. I’m finding myself looking for people who already understand what I’ve been describing: we find each other, we have a great time, and the rest of the world can go to hell. That’s not good.

Maybe the neoliberal power structures that support capitalism will make this impossible at scale. Maybe all I can hope for are small wins. Maybe I can write and change the minds of the next generation of designers, who can continue trying to explain to some people why they should care about other people. Maybe it’s about bringing the users for whom we design into the team, as long-term, equally paid equals. Nothing else has really worked, has it? Maybe it’s time to try something different.

March 2023

When did software lose its softness?

Of all the things software UI could’ve been good for, the big one was personalisation. With no ‘hardware’ interface, the ability to customise and adapt any user interface to anyone’s needs is software UI’s competitive advantage. But that doesn’t seem to be how it’s working out.

Almost 20 years ago, CSS Zen Garden sprung up as a way to show the power of CSS to those new to the web. You could take exactly the same content, and, completely separately, make as many ‘front-ends’ as you wanted. I don’t mean tweaking a button colour here and there like we tend to get caught up on these days. No, this was absolute and complete wholesale UI change.

A modernist image of a HTML page from CSS Zen Garden
Two websites using exactly the same HTML can look completely different via CSS Zen Garden
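
To make the mechanism concrete, here’s a minimal sketch in the Zen Garden spirit (my own contrived example, not one of the site’s actual themes): one fragment of HTML, two alternative stylesheets, two completely different pages.

    <!-- The content: written once, never touched again -->
    <link rel="stylesheet" href="theme.css">
    <article class="intro">
      <h1>The Road Not Taken</h1>
      <p>Two roads diverged in a yellow wood…</p>
    </article>

    /* theme.css, version one: quiet and bookish */
    body   { background: #f6f1e7; font-family: Georgia, serif; }
    .intro { max-width: 34rem; margin: 4rem auto; color: #222; }
    h1     { font-weight: normal; letter-spacing: 0.05em; }

    /* theme.css, version two: loud and modernist – same HTML, a completely different page */
    body   { background: #111; font-family: Helvetica, sans-serif; }
    .intro { max-width: none; padding: 2rem; color: #7cfc00; text-transform: uppercase; }
    h1     { font-size: 15vw; line-height: 0.9; }

Swap one stylesheet for the other and the whole ‘front-end’ changes; nothing about the content has to know or care.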

Back then, the internet was relatively new, albeit gaining momentum fast, and so we designers were more fixated on making the stuff ‘look acceptable’ – a focus mostly on aesthetics as we battled with early-days HTML table layouts, 1x1px gifs in a world before rounded corners, caveats about which size of monitor and which browser a website was best viewed at, and ‘sliced’ images if we wanted things to be a little more ‘responsive’. It really was a moment where graphic designers adapted offline content like brochures to the online alternative – brochure websites.

Along came some ‘fancy’ technologies like Flash and, for a brief moment in time, there was a sense of playfulness among the people and businesses who had a presence online. Things like ‘splash pages’ were the designer’s equivalent of opening movie titles – how might one set the mood of a website, with a focus on form, before getting to the heart of the content?

As the usefulness of the internet evolved, so too did concepts like ‘usability’ and ‘interaction design’, and our focus on them. We did this the best way we knew how – skeuomorphic design – taking real-world affordances for things like buttons and tabs and mimicking them on websites and other digital services so that people understood what to do when they saw a particular interface element. Meanwhile, consultancies like Nielsen Norman Group were devising principles and practices for ‘best practice UI design’ which, whether you like it or not, still hold true today.

Back then, when the world was a little less connected, most businesses expected people like them to visit. I designed websites for ice cream shops, car dealerships, and other small to medium-sized businesses that were mostly interested in giving people in their local area a way to contact them, and then, a bit later, make a purchase online.

But like a mycelium network, the internet exploded, amplified by the advances in mobile computing. Relatively quickly, humans could see, hear, and feel other humans – everywhere.

One thing I never really see the internet get credit for, at least at a meta-level, is how it accelerated humans’ understanding of one another. Some may call it ‘woke-ism’ but, in truth, the internet exposed, and continues to expose, the diversity of experience that exists across the human race – race, gender, disability, neurodivergence – you name it. I feel like we’ve just scratched the surface, as if we’ve only just begun refracting the spectrum of light into its component parts. There are differences in the human race we haven’t even noticed yet.

And, if we’re honest with ourselves, it’s fair to say we’re struggling with that. We’re struggling to ‘catch up’ to this explosion of awareness by categorising an ever-more-nuanced set of human traits so that we have a common language with which to discuss our individuality and design a world that’s fair and just for all. Because of the inherent complexity in the infinite diversity of human experience, our brains seem to get lazy and, as a way out, it becomes easier to stick with broad generalisations and proxies for certain values – us vs them, woke vs not woke, informed vs uninformed, left vs right. It’s not our fault; it’s just a reaction to the explosive power of the internet and the tools we’ve built to connect ourselves to one another. We are but animals in a rapidly changing habitat.

What’s this got to do with UI?

Well, there’s an opportunity – not necessarily a business one, although some folks will argue it is – for accelerating equality across all humans, and it lives in software.

Let me put it this way. The world is reliant on digital tools and services more than ever, and unless there’s an Independence Day (the movie) level societal collapse, that reliance is not getting any weaker. At the same time, with every day that passes, we are increasing our knowledge and understanding of the ever-growing diversity of the human experience.

See the connection?

There has never been a greater opportunity in human history to create tools and services for anyone. Meanwhile, software (and the user interfaces through which we interact with it) is our raw material and, by its very nature, is soft – abstract, malleable, fluid, adaptable.

A modernist image of a HTML page from CSS Zen Garden
Two more websites using exactly the same HTML via CSS Zen Garden

Why then, if we had the technology back in the late 90s to create completely different interfaces with exactly the same content, do we currently inhabit a digital world where software UI has become as stiff as hardware UI once was?

Sure, OK, I’m not an idiot, I know it’s cost. It costs businesses to build UI, and the last thing any business wants is to spend money making ‘multiple versions’ of the same UI – designing, deploying, managing, supporting – especially if ‘target audiences’ are small, because the ROI on that investment is likely to be small. For what it’s worth, this is the same argument commonly used around addressing ‘inclusion’ (or ‘accessibility’, if you prefer that term) – but my answer is the same – fine, so who’s fixing it and what happens if we don’t?

The digital world talks about personalisation in the context of selling more stuff. It talks about AI in the context of ‘accelerating the commodification of everything’. It talks about ‘inclusion’ and ‘diversity’ in executive round tables where things get so complicated and nuanced that the easiest thing to do is bury heads in the sand or make ‘decisions’ which are often, though not always, empty promises of reform and change. But, as almost anyone in digital (and ecology) also knows, personalisation and diversity are accelerators of all sorts of success. It seems we’ve decided they’re just not needed in our interfaces.

Look, I used to code. I used to be able to write HTML, CSS, JavaScript, C++, SQL, blah blah. And sure, over time, our systems have become more complicated and things have progressed such that the focus has been on scale – the most for the many. React, Next.js, and so on promise speed, security, and scale, but the problem of personalisation doesn’t fall on the engineer, designer, and product manager working to ship features because ‘that’s a business model problem’. I guess I’m here to ask the obvious question – what if it wasn’t?

What if, instead of prioritising speed and scale in our engineering frameworks, we built humanity in – a way for anyone to customise the way they interact with the services and tools we make?
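
As a rough sketch of what ‘building humanity in’ could mean at the code level (the property names and values here are my own, hypothetical ones, not any particular framework’s): expose an interface’s core decisions – colour, type, spacing, motion – as CSS custom properties that components consume, so a person’s saved preferences can restyle everything at once.

    /* The product ships sensible defaults… */
    :root {
      --bg: #ffffff;
      --fg: #1a1a1a;
      --text-size: 1rem;
      --line-height: 1.5;
      --density: 1; /* spacing multiplier */
    }

    /* …honours preferences the platform already knows about… */
    @media (prefers-color-scheme: dark) {
      :root { --bg: #111111; --fg: #eeeeee; }
    }
    @media (prefers-reduced-motion: reduce) {
      * { animation: none; transition: none; }
    }

    /* …and components read the variables instead of hard-coded values,
       so a user-supplied override – a settings panel, a saved theme,
       a stylesheet of their own – reshapes the whole UI at once. */
    .card {
      background: var(--bg);
      color: var(--fg);
      font-size: var(--text-size);
      line-height: var(--line-height);
      padding: calc(1rem * var(--density));
    }

None of this is exotic – it’s the Zen Garden idea again, with the user holding the stylesheet.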

The overwhelming joy and possibility of my first experience with CSS Zen Garden unleashed my love of the digital medium. For the first time, a designer could divorce form from content. We were no longer trapped in A4 document boundaries or DL brochures. We were no longer bound by the 3m x 2m shopfront window – that sign, or brochure, or document could be for anyone. Back then, I’m ashamed to say, I was completely unaware of the diversity in the audience – their preferences and abilities – but now that I am, I can’t help but wonder about the power that lies within a modern-day CSS Zen Garden approach to building front-ends; not just for the ‘functional’ stuff like ‘accessibility’ and ‘inclusion’ but for the emotional stuff, too.

Maybe I’d buy more if my Etsy experience could look the way I wanted it to look, rather than what’s easiest for the team to maintain internally. Maybe I’d prefer doing my taxes online if I could organise the interface the way I wanted to, rather than the way an accountant thought I should. Maybe I’d have more fun on Instagram (which would lead to higher engagement) if I could change the colours and fonts and customise the way I ‘scroll’ through content, just because that’s how I prefer it. What if I could choose and own the interfaces between me and said company, product, or service?

What are we doing – the shapers of the building blocks of our digital experiences – to create a world that marries the best of what “Soft UI” can bring (adaptability, changeability, customisability) with the ever-increasing diversity of human experience that we know exists? And, should we, could we, be doing more?

November 2022

Don’t Fence Me In 

This is a loose transcript of a talk given at 99Designs in Melbourne, Australia on Thu 3 November. Throughout history, designers have helped humanity break limits. We’re living longer and in more places across the planet than ever before, and with bodies that are decidedly super-human compared to our ancestors. But what happens when the world we thought was limitless starts to push back? When the systems we thought we controlled and dominated begin to show us that we aren’t ruler of all, but dependent on all? Can the very process of Design – the one that got us here – get us out of it?

When I decided to study Design, I thought what I was doing was finding a way to ‘express my creativity’ – a way to use my heightened visual sensitivity to make the world a more beautiful place. But now, 20 years later, I find myself as a founder of a company whose intention is to make the planet a more habitable place for humans once I’m gone. That’s not where I thought Design could end up.

An image of a cartoon cow doing a poo with the words I never thought I'd care so much about cow shit underneath
Slide 1: How might a designer ‘get a seat at the table’?

This morning, I spent 3 hours discussing cow shit. Understanding how it works, why it’s bad (and good), and how we might help the world understand it better, too. There were a lot of technical words but, in the end, we’re talking about shit.

In my view, helping to restore the natural environment – so that we’ve got some chance of eking out a higher-quality human existence for longer than we otherwise would – is one of the world’s most pressing problems. And, if I can find myself – a designer – in that conversation, I believe other designers can and should be there, too. So that’s what I’m here to talk about: how I think I did it, and how others might do it also.

A note about privilege

Before I get started, I want to call out that I’m aware this point of view comes from a place of privilege. I’m tall, white, male, heterosexual, have English as my native language, and I lucked into a non-abusive, very loving family – the deck is stacked in my favour. Not everyone has it this easy so maybe it’s not possible for all of us to do what I did. But I think it’s still worth sharing this in the hope that you’ll find your own nuggets of wisdom here, and re-mix these ideas so that they work for you. This isn’t about me ‘telling you what to do’. It’s about me sharing a story so that you can process it and work out the bits you agree with, disagree with, or, like how most diversity works, just make better.

A cartoon character of a man with a top hat and a monocle
Slide 2: All the ways I’m privileged. The cards have been stacked on my side since I was born.

A bit about where we’re at and why it’s urgent now

Throughout history, the process of Design has been used for much more than ‘making the final thing look nice.’ Even as far back as the early 1970s – it’s in Victor Papanek’s book, Design for the Real World – people have been thinking about what Design is for, and what it does.

A triangle with each side labelled Habitat, Mortality, Biology with arrows pushing outwards
Slide 3: The ways in which Design has helped humans flourish

At the risk of using a very broad brush, Design has been used to push boundaries and help humans overcome limits – the limits inherent in the bodies in which we live.

Design is helping us live longer lives in bodies that are, by any measure, super-human compared to our ancestors. We’ve also colonised environments where, without Design, human bodies cannot live – whether that’s icy-cold places or roasting-hot places (ski slopes in the desert, for goodness’ sake!). And, because we’re running out of room to put people comfortably here on Earth, Design is also helping us look beyond the planet. I have some strong opinions about that expansion, but that’s a whole other talk.

So, whilst the methods of Design have helped with all of this ‘expansion’ of human limits, we’ve required one core thing to underpin all of it. That thing is energy.

A triangle with each side labelled Habitat, Mortality, Biology with arrows pushing outwards with a bone burning in the middle implying it's pushing the limits further
Slide 4: We’ve burned bones to stretch the limits of our existence even further.

For many years, we’ve been burning the bones we found in the ground to fuel our expansion. We’ve burned bones to extend our lives, our health, and our habitat to live relatively energy-rich lifestyles. Of course, that richness isn’t equally distributed around the globe (and therein lies one of our fundamental problems) but more on that later.

The thing is, we’re running out of bones, and we’ve learned that the whole bone-burning thing has had some side effects. Let’s simplify those side effects and call them ‘climate change’. It’s great that we’ve realised this, and it’s great we’re doing something about it. We’ve realised that bones are finite and that, in comparison, sun and wind and other ‘renewable’ energy sources are essentially limitless. So, we’re pivoting. And, if we’ve got an infinite energy source, we can continue to break those limits and keep expanding, right? With a source like the sun, we’re saved.

But not so fast.

See, I thought this too. But then I read a book called An Inconvenient Apocalypse, and it changed a few things in my brain. Whilst the energy sources might seem infinite, humans are actually in the business of energy conversion and that’s where, like bones, we’re going to hit limits. We can’t use the sun without something in between (unless we’re tanning) and we can’t use wind unless we’re happy with it blowing us to places that we don’t necessarily want to go.

A diagram showing the sun and wind energy, the ways we convert it (solar panels and wind turbines) and the materials we use to facilitate conversion
Slide 5: Whilst energy sources may be renewable, how we convert that energy is not

To convert solar energy we’re currently using solar panels. To store that energy captured by solar panels we’re using batteries. We need both things – to capture it and to store it.

But, to make a solar panel (or the number of solar panels we need to continue on our current expansion trajectory) we need raw materials – silicon and silver are the primary elements used to make the panels – plus loads of lithium to store the energy in batteries.

To convert wind energy into something we can use, we also need raw materials. A single wind turbine is currently manufactured from a combination of iron, plastic, aluminium, copper, and a bunch of other things. The manufacturing process itself needs energy, too (plastic has to come from somewhere), and so we start to see a vicious cycle emerging.

Now, unless I misunderstood what Mr Zanzyk taught me about physics and chemistry in high school, there are only so many raw materials in the world (118 elements in the periodic table as of 2022). We’ve used a chunk of them already for what is, largely, kind of useless stuff. In fact, I read something the other day that said we’re about to enter the age of ‘above-ground mining’ because there are more raw materials sitting in landfill now than there are underground. You could say that this is all rather wasteful. That physical stuff is problematic and, if we just stopped buying consumer goods or making things, maybe digitised a bunch of stuff in the metaverse, we’d be better off. Right?

But, technology won’t save us either because digital runs on raw materials, too.

A diagram showing a person holding a phone, servers in the middle, and a trash can on the right
Slide 6: Digital technology requires raw materials to function, and it’s possible to say we’re not using those materials very wisely

A simple example of a current, global phenomenon comes from the very field we’re practising within – digital technology. We (about 5 billion of us) are currently producing endless amounts of video. Video, as far as data goes, is expensive. Orders of magnitude more storage space is required to store video than, say, text or sound or static images.

Where is all this video being stored? Well, data centres, of course. Data centres full of servers. How are servers made? Well, they’re manufactured. From what? You guessed it: raw materials. Here we are again, back to raw materials. In fact, a single server uses a combination of 50 of the world’s 90 raw materials. Why do we need all this video in the first place? Well, social media companies have worked out that the more data they collect about our lives, the higher-resolution picture they get of who we are and what we like. This means they can sell higher-value ad space to brands who want to ‘connect’ with us, so that people who make junk can get better at convincing us we need it. This cycle uses up the very raw materials we’re going to need to extend life on Earth even more quickly. It’s a toxic cycle. It’s f**king terrifying.

I’ll just pause a bit to let that sink in.

Why are designers stuck on surface level problems?

So, things look a bit shit. Maybe they feel a bit hopeless. But the thing about human existence is that it’s a complex adaptive system (see Dave Snowden’s work on the types of problems that exist in the world). That means that things aren’t causal but, rather, dispositional. Our actions aren’t part of an equation where, if I do X, Y will definitely happen (we see this assumption in Design all the time). It’s more that, if I do X under these conditions, Y is more predisposed to happen (but not guaranteed). And so, in my mind, this is where hope lives – what we’ve got isn’t a system written in stone. We can turn things around. And this is the space in which designers can contribute.

A scuba diver swimming past the surface and going deeper into the ocean
Slide 7: We are wading through surface level problems, often looking enviously to the bigger, chunkier more meaningful problems (without a map to get there)

The problem is, when I began my design career, I didn’t know what Design was. I thought my job was to create optimised buttons, cart checkout experiences, login flows, and password resets to make profitable companies more profitable. And I did that, for a while. And it was fun to see something you did move a metric that made my boss happy. But I’ve always had this burning urge that the way I was thinking could be used for more. And the world has some big f**king problems to solve – climate change, aged care, AI ethics, social justice, wealth inequality. They. Are. Huge. I just didn’t know how I could get there. It seemed so… deep.

decorative only
Slide 8: Strap on your scuba gear, we’re going deep

But you know what? I had never seen (and still very rarely see) a Designer working on these problems, or talking about them if they are. Why is that? What is it about being a ‘designer’ that prevents us from working on bigger, gnarlier problems? Why aren’t we invited into the conversation? And does that mean what I’m doing now is just a glitch in the matrix? I’ve had a fortunate career in being able to work on this stuff, and I hear loads of designers wanting to do the same but having trouble ‘breaking in.’

“Hey, I’m the new designer”

Here’s how I’ve seen designers introduce themselves and talk about their craft:

A designer enters a new team, maybe meets their Product Manager counterpart first and they say, “Hi, I’m the new designer, let’s get to work.”

A cartoon of a designer saying to a PM - I'm the new designer, and the PM thinking about all the designers she's ever met
Slide 9: Introducing ourselves to a new team member seems easy, right?

What happens in this moment is that the PM hears the word “Designer” and thinks about all the designers they’ve ever worked with. If the new designer gets lucky, that PM has worked with some designers who’ve done more than graphic design but, more often than not, that’s not the case. What happens is that the PM is building a picture in their mind of what you do. They’re making calls and setting up boundaries in their brain about when to get you involved in a conversation, and how to work with you. And that’s just the PM.

A diagram showing that, at a team level, 4 people can have 12 different versions of what a designer is in their heads
Slide 10: Introducing ourselves to a new team adds huge amounts of complexity and less certainty that anyone knows what we do at all

Now, if we escalate that introduction to others who you may encounter in your day-to-day collaborations, you may meet:

  1. A developer who has only ever worked with one other designer and that designer was a pixel-perfect control freak that didn’t know how to collaborate and made the engineer feel like shit. You don’t have a friend there, right off the bat. They are ‘anti-Design’.
  2. A client or boss who has a niece who’s ‘studying Design’ and they made some flyers or an invitation for their yacht club gathering last Saturday.
  3. A PM who, maybe in a previous life, worked with a designer who had a research bias? Or a visual bias?
  4. Maybe another dev who worked with a few designers who knew nothing about HTML/CSS, so they found it difficult to communicate – and now, to them, you’re that person.

Designers have a serious identity and branding problem

What we’re left with is a total mess. Every person you meet has a different idea of the role you play in helping the team make decisions, develop solutions, test them, and iterate towards something that’s better for the world. What’s the default in this situation? They make a connection with the 1950s version of the commercial artist, and you’re left putting lipstick on a pig, right at the end of the pipeline where it’s too late to have any strategic influence. I’ve written about this history in depth on my blog if you’re keen to go deeper.

No one likes waste
Slide 11: Designers have an identity and branding problem

After my first few roles like this, I got pissed off. Introducing myself as a designer was not working; no one knew what I was or what I could do. So, I stopped introducing myself as a designer. What I replaced it with was talking about what I do, not what I am. In its simplest form, that’s all about Minimising Waste.

No one likes waste
Slide 11: I love waste, said no one. Ever.

A designer’s skill set, at its most basic, isn’t about solving problems; it’s about minimising waste.

No one likes waste. We don’t like to waste time, energy, or money (especially businesses). So, no matter whether you’re talking to an exec, PM, developer, or someone else, waste is the common denominator. But the problem with keeping it that simple is that it’s also a bit abstract (just like being a ‘problem solver’… because, you know, PMs and Engineers solve problems too, right? For what it’s worth, please don’t tell people you don’t know that you’re a problem solver.) And so, what I’ve learned is that it pays to be specific. Keeping things short may feel efficient, but it just leaves more unsaid.

Some word clouds that explain different ways to talk about Design activities (mentioned below)
Slide 12: Shorter labels don’t always make for the best communication

The word “Research” sounds expensive and evokes lab coats, and rigour, and lots of time. So, I changed it up: “I can help you gather evidence to make a more informed decision about what to do next.” Suddenly, that re-brands ‘expensive research’ into something that could mean a 5-minute check-in with a few folks in a coffee shop to make sure we’re not building something stupid. Or a quick skim of analytics to see if anyone is using things on desktop anymore.

Here’s another one: “I have the skills to invent and describe how people will interact with what we make.” Most people call this “UX”, which leads a collaborator to ask, “Where’s the wireframe?” Using words like “interaction design”, and explanations like the one above, frees you to use whatever is in your toolbox to do your job. I haven’t created a digital wireframe in 10 years, but I’ve mostly spent my time in Interaction Design. It’s been pen, paper, workshops, whiteboards – doing just enough work to ensure the team knows what to build and why. It works.

And now let’s talk about the easy one – Visual Design or UI. People know what that is, right? Well, the problem we’ve got is that it’s more complicated than that. What we’re really doing is “using colour, layout, and typography to help us achieve functional goals or heighten an emotional response.” You could call it ‘Visual Design’ or “UI”, but that doesn’t describe what is essentially Visual Engineering. I say visual engineering because any graphic/UI designer knows that it’s the relationships between the elements on the screen or page that matter. For what it’s worth, engineers get away with this all the time:

“Oh, well, if we make that CSS change, it’s going to break this other thing and there’s some refactoring time to do that, so it turns out your ‘simple request’ is really expensive and we probably shouldn’t do it.” – Most developers I’ve worked with. PMs and Designers choose not to understand code and so, guess what, the developer wins.

But it’s the same with Design. We get feedback that is solution-oriented – “Let’s change the button to purple and move on” – but we know that when that button goes purple, it’s going to draw too much or too little attention to this other thing on the page, so we need to adjust the line-spacing or heading size to compensate. We need to ‘refactor’ the page, but we never say that, and people who don’t know the science behind how the eye works don’t think about it. Colours matter. Hierarchy matters. Visual relationships matter.

It’s our job to explain ourselves better.

Some word clouds that explain different ways to talk about other common Design labels (mentioned below)
Slide 13: The link between ‘creative’ and ‘magic’ may make us feel special, but it results in less respect for what we actually do, which is Science

“Creative” is a term so closely related to ‘magic’ or genius because, to people who don’t have good lateral or divergent thinking skills, it seems that way. It also makes us feel good to identify as ‘creative.’

But I never call myself a ‘creative’ because I don’t do magic; I do Science. What I have are “strong lateral and divergent thinking skills”. I also have exceptional critical thinking skills, which are all about the ways I can take a large number of options and whittle them down to the one best suited to achieving the goals we’ve set out as a team to achieve. That’s what ‘creativity’ is: the ability to think laterally first, then apply constraints to critique those options and come out with the best one. It comes so naturally to us that we don’t even think to explain the process to others. Trust me, most people are really shit at it, too.

Show your workings. Explain your thinking.

As designers, we don’t like showing shit work. I mean, maybe no one does? We often begin designing something, and only through designing do we evolve the idea and think of other pathways of exploration. We do this iteratively – expanding and critically thinking our way through the multitude of options to get to a ‘final design’ that we’re comfortable presenting.

A diagram showing all the possible pathways a design idea can take, highlighting a ‘throughline’ to the ‘correct’ answer among all the possibilities
Slide 14: We do so much work in getting to a ‘final’ answer, but we never show it for fear of being seen as someone who’s fumbling their way through the options.

But I can’t count the number of times I’ve presented a ‘final design’ only to have someone say, “Hmm, this is good. But what if we tried this? Or this? Or this?” – which often ends in a dumpster fire of design by committee.

What I want to say in those moments, but never do, is, “I already thought about all that, you idiot, why don’t you trust me?” Instead, now, I say, “I already thought about this…” and then I bring up my Figma file with its 500 artboards, because I’ve ‘saved snapshots’ of my iterated thinking. I can say, “Here it is, and here’s why it doesn’t work in the way you haven’t thought about yet – because I’ve already done the thinking!”

Boom. Mic Drop.

The key takeaway? Keep your iterations and show them when you need to. It’s an incredible amount of work and skill that we hide from our co-workers for fear of looking a bit shit because we didn’t come up with the right solution the first time. Everything is iterative and your artefacts can help you have those conversations.

Developing your non-confrontational why

No matter how much time we spend, and no matter how much progress we make in climbing the strategic ladder, there will always be times when we’re presented with a decision or direction that was set in a conversation we weren’t included in.

Purely decorative
Slide 15: I say ‘sort-of’ stolen from Mike Monteiro here because he talks about getting comfortable with asking why and saying no. I think one way to get comfortable is to develop your own way of doing it that builds allies instead of making enemies.

There are two ways to ask why that decision was made.

“Sorry, but why wasn’t I part of this conversation? I told you that I would prioritise being in that meeting and you left me out of it.”

Oof. Who’s going to respond positively to that? There’s accusation, attack, anger, and defensiveness all rolled into one question. But there’s another way to do it, grounded in a combination of vulnerability and curiosity.

“Oh, thanks for that. Sounds like it was a productive meeting. I reckon I could help there. Do you mind just sharing a bit more context behind the decision? It’ll be helpful for me in designing something that works.”

And with something like this, we’ve opened up an allyship. The Designer gets the background and context they need to go off and Design something that might actually end up revealing a problem that no one thought about in that original meeting.

Suddenly, when a presentation comes around, the Designer can present their deeper thinking, sometimes challenging those original assumptions, and then use the critical thinking of the team to make something better than what was ‘handed down from on high.’

This is a simple and contrived example, but it’s really there to illustrate a point – there are two ways to ask why. One way builds friends and your reputation; the other creates enemies and further reduces the ‘designer’ to a disgruntled team member. In all cases, it’s better to be the former.

In my experience, finding your non-confrontational why (and also your non-confrontational “no, that’s objectively not a good idea”) starts to dismantle the ‘commercial artist’ stereotype of the designer in co-workers’ minds. The next time an important strategic meeting happens, people like your Product Manager are more likely to request your presence, even if their boss doesn’t think it’s useful. That seat at the table we’ve been wanting is beginning to feel a little warm, right?

But it’s important to note here that everyone is different, and what feels natural to me will not feel natural to another. Designers need to work out what works for them, and for their team or the people they’re trying to work with. It’s been 20 years of trial and error for me, and I’m still learning. It’ll probably be the same for you. Doing a course in conflict management, or even reading a book about it like Conflict Without Casualties, will help.

The limits of your language will be the limits of your world

Designers seem to have the most power and influence when they are accepted and seen as knowledgeable generalists. But, to be that knowledgeable generalist, we need to learn a few languages.

Purely decorative
Slide 16: Designers are translators, bridging the language divide between those we collaborate with to build stuff, and those we build it for

Learning the language of the science behind why we make decisions helps us provide reasons for our work. If we have reasons for our work it’s far easier to have a non-confrontational conversation about which solution or solutions meet the goals we’re trying to achieve. After all, it’s not designer versus the world that we want, it’s “here’s the science, do you really want to make a decision that goes the other way?”

Design is based in science, even though many of us enter it from the ‘arts’ angle. The science of psychology – unconscious bias and the limits of human capacity – gives us the materials we work with. For example, when we’re designing for ‘delight’, we can quote the peak-end rule to describe why one solution will work better than another. Laws of UX by Jon Yablonski is a great reference guide for beginning to build that vocabulary.

We need to be, of course, experts in ergonomics. A person can only hold a device in so many ways, and thumbs and fingers can only stretch so far. Environmental factors – like whether they’re driving – influence this greatly. These limitations drive our choices about where and why we place buttons and key information, as well as how people interact with the non-visual interfaces we create.

Business is the language of our employers and product managers. Building a vocabulary around economics and business is important so we can communicate with them and describe our designs in those terms. Increase revenue, reduce attrition, maximise retention – being able to speak in those terms demonstrates to our business-oriented colleagues that we understand where they’re coming from, and that our seemingly simple design artefact has considered them throughout the process.

And, whilst we don’t need to know how to code (unless you really want to), knowing what our developers are talking about when they say words like CDN, Azure, HTML/CSS, React front-end, performance, and so on makes us more able to consider their constraints and push their boundaries where it’s appropriate to do so. The one thing I love about most developers I’ve worked with is that they love sharing what they know, and often what they know or love makes for a better experience for the people we’re designing for – they just don’t know how to translate it into a ‘user experience’.

Finally, there is domain specificity. Whether you’re working in climate, housing, social justice, agriculture, health, education, law, or wherever you find your interests sparked, knowing the acronyms and the technical terms is critical. It’s critical to building strong relationships with domain experts (I spend a lot of time listening to how smart professors profess to be), but also to ensuring that those technical terms don’t leak through an interface and confuse the hell out of users. In a meeting earlier today, we used the words “enteric fermentation”. Another way to say that is cow burps and farts. Which one feels easier to understand, given everyone here isn’t an environmental scientist?

So, here we are, ‘designers’ for whom most of our work is actually translation. The bigger our vocabulary – across not only our own specialisation of design, human factors, psychology and so on, but also everyone else’s – the more able we are to do our jobs: make things that don’t harm people and improve their lives instead. We also begin to sound far more intelligent than ‘the person who makes things look nice’, and it’s those ‘smart people’ who, again, find themselves in the meetings that decide how the world will run.

Write your thoughts down (or at least what you think is true today)

This was and continues to be the most transformative part of my practice and it’s both one of the easiest and most difficult things to do – write down what you think is true today.

Purely decorative except for the inky drawing of a pencil
Slide 17: Start a blog or journal. Now. Today.

Writing is linear in nature. That means that, to write well, you have to order your thoughts. But it works a bit like design: as soon as you start writing, you have more thoughts, and those thoughts generate even more thoughts. This is writing’s strength, because it forces you to sort it all out – to separate those intertwined threads and turn them into clearer ideas.

Keeping a blog going for over 15 years has been, and continues to be, the most transformative act in making me a better designer.

Writing also helps improve your vocabulary and, over the years, it creates a snapshot of your ever-evolving knowledge. I look back at some of the things I wrote over 10 years ago and think, “Gee, you were an idiot.” But then there are other thoughts I re-read and think, “Woah, that’s really insightful.” Writing begets more writing, which begets clearer thinking, which makes you a stronger thinker and presenter.

This talk tonight is a combination of four different recent blog articles, and I can only spin up a talk like this because I’ve already clarified what I think to be true right now through… you guessed it, writing it down.

So, start a journal. Start a blog. Start a collection of index cards or post-it notes. I know privilege changes what’s possible, but I know one thing for sure – writing will supercharge things for you.

Know thyself

So far, I’ve talked a lot about influencing externally – how we can have better conversations with others to get that elusive ‘seat at the table’, if we want it. But one of the most important things, if we want our work to be sustainable, interesting, and exciting to us, is to get to know ourselves and become comfortable with the strengths we each have. Once we know those things, we can find the environment that suits us and in which we thrive.

An image of a cartoon woman looking at herself in a mirror
Slide 18: Whilst it’s important to work on external influence skills, it’s just as important to find out who you are and get comfortable in your skin.

Benson talked earlier about Design Ops and Design Systems. You DO NOT want me in charge of that. I’m terrible at consistency (when things stay the same for too long, I need to shake things up). He also mentioned reducing the amount of context-switching for designers – creating space for deep work. And, whilst I agree that I need space for deep work sometimes, I actually find that having four things on the go at once is better for my problem-solving capabilities and my general energy levels than having just one problem to solve for weeks. It’s why I avoid large enterprises – where work can take months to ship – like the plague, and prefer start-up work, often 4 or 5 different ones at a time. Iterate fast, frequently, and learn.

None of these things are morally good or bad on their own. I used to envy people who have different skills than me; now, I just know the world needs that diversity. We need the Megans and the Bensons of the world to do the work they’re doing in these large orgs. There is genuine social impact there because they’re working at huge scale. Come up with a new interaction pattern and ship it to a few million people around the globe, and you will change how the internet (and society) evolves. The world also needs folks like me – I can only do talks like this because of years spent changing jobs and roles every three months or so, and because what I love is working across science, immigration detention, electric vehicles, VR, education, health, human services, and now climate.

I have this saying – it’s difficult to grow a mango tree in Melbourne. You can do it, but you’ve got to put loads of support around it; plastic covers in Winter so it doesn’t die of frost, and then you have to water it relentlessly in Summer when it’s dry and Melbourne gets no rain. But, no matter how much you care for it, the mangoes grown down here are never sweet and juicy like the ones grown in Mareeba, Queensland. It’s just not a plant that’s suited to this climate. It doesn’t belong. And we’re OK with that. Well, most of us are.

Designers are just plants with complicated emotions. For some reason, we believe that we should ‘adapt to anything’. Having trouble at a large enterprise? Just work harder, or build better relationships, or blah blah blah. Put that plastic sheet over yourself in winter, drink more water in summer. But you still won’t be a juicy mango.

The alternative is to work out what type of plant you are – enterprise/startup, research/visual, code/strategy etc – who you need around you to complement you, and then go looking for an environment in which you will simply thrive because you’ve been planted in the right spot.

For example, my co-founder and I are brilliant complements to each other. She has a background as an engineer and is detail-oriented, thorough, and loves a good spreadsheet. She did the budgeting for our business when she had COVID because, for her, that was the ‘easy thing to do’. But set her a task to connect with randoms on LinkedIn to grow the network, or to pull together and execute a content campaign to grow our presence in the market, and she quivers with fear like I do when I see a spreadsheet with a million cells. But I love doing the networking stuff. It’s easy, energising, and fun. So, we’re doing this work together, and the business (and our families) are better off for it.

In the end, it’s about minimising waste

And so, I return to what designers do, not what we are. We are victims of our own identity and history. We keep messing around with titles – Web Designer, Multimedia Designer, UX Designer, Service Designer – all attempts to dig us out of a hole dug by our historical connection to commercial art in advertising.

There's no time, raw materials, or energy to waste
Final slide: And, it’s time to go home

But, from my experience, we are indeed some of the most valuable brains in society. We can think laterally and critically at the same time. We have an innate curiosity and compassion which drives us to do this sort of work in the first place. We are, often, exceptional visual communicators. Let’s not waste those brains, or the raw materials that make the world go round. There simply isn’t enough of either to waste.

Thanks.

October 2022

Hybrid working isn’t a middle-ground

If we’ve learned anything over the past 24 months, it’s that calls work better when everyone dials in individually. With good facilitation, things are more inclusive, equal, and fair. Great meetings with loads of vision and lateral thinking can happen over a video-conference. Mics don’t always need to be off, or on. Neither does video. There’s a time and a place for all of those remote meeting settings, so prescribing ‘a company rule for everyone’ doesn’t work.

You know what else doesn’t work? Two or more people dialling in from a shared webcam while the rest of the meeting participants dial in individually. No amount of training or self-control has been able to discipline the two (or more) co-located people away from engaging in a more fluid, richer conversation together, to the exclusion of those who have dialled in. Body language is rich, turn-taking is slicker, and the centre of gravity of an in-person conversation is so strong that it becomes much more difficult for anyone who isn’t in the room to participate.

So, where does that leave us? Well, if just one employee has to dial in, it leaves us having to support distributed working. There’s no middle ground. People need good AV equipment, good remote facilitation skills, and an understanding of how turn-taking works in online video calls – they (and especially business leaders) need to know how to work in a distributed way. If anything, saying ‘we’re hybrid’ sets up an office environment for failure, not success. Things will only get harder until we reckon with the underlying question – what’s the office for, now?

What’s the office for?

The office used to be a place where managers would sit, attached to a factory, making sure workers performed their jobs. But knowledge work – work that simply requires a laptop and phone – doesn’t happen in a factory anymore. So, what’s the office for now? Why do we think that ‘returning to work’ is synonymous with being at a particular place for a particular time?

Maybe the office becomes a meeting place? Maybe it’s a place for people who don’t have great working-from-home setups to get some distance from their home so they can work in an environment that’s more ergonomic and conducive to better focus for them.

Maybe it’s a place for people to have focussed collaboration space and work through gnarly problems together – problems that are novel, highly collaborative, or where the multi-sensory component of the get-together is important to the outcome (like training). Or maybe you think it’s still for leaders to ‘watch over’ their employees to make sure they’re still doing their job. But, if that’s the reason, then hybrid won’t work for you either – 100% in the office is probably more your jam, because what that says to me is that you don’t yet trust people.

Some examples for re-thinking ‘the office’

Right now, what seems obvious is that, to support distributed working well while also leveraging the benefits of a place that many of us can decide to use at the same time, the office could be set up differently: to enable individuals to sit next to one another without risk of background noise or mic crossover. This is easily achieved with some noise-cancelling software and a decent mic (call centres have been doing this for ages, by the way).

This idea provides a way for everyone to dial into remote meetings individually, regardless of location. It makes it inclusive for those who can’t make it in that day. Then, when everyone leaves the meeting, those who chose to work from the same place, say, ‘the office’, can still go to lunch together and enjoy the benefits of in-person time.

Perhaps pairing this idea with optimising the design of the space for larger collaborative group exercises as things head back towards something that resembles normal – work that is experiential, novel, or highly collaborative – gives ‘the office’ a different but more useful purpose than trying to cram everyone back into individual desks, only to have them all wear headphones anyway because open-plan offices are terrible for concentration.

For some businesses, a communal space for employees still feels important – there are huge benefits to this – but it’s not an ‘office’ anymore. Words like ‘collaboration hub’, ‘meeting place’, and ‘homebase’ feel a little more descriptive and true of how those ‘office spaces’ could be used now. No matter what anyone calls it, what it truly means is that there’s no such thing as hybrid: as long as we choose to support one person dialling in, we all need the distributed-working skills to make it work inclusively and fairly for that one person who couldn’t be in that day.

Sure, there will be times when full teams can work together, at the same place and at the same time. Supporting distributed working doesn’t mean giving that up. But if teams also value inclusivity, then even though they may have a space to share that’s sort of near where their employees live, they still need to invest in good tools, practices, and processes that support everyone – not just the few who live within a commutable distance of a common space we used to call the ‘office’.

‘Hybrid’ is a false hope

The examples I give aren’t exhaustive, but it worries me that leaders seem to think the decision to adopt a ‘hybrid model’ implies some middle ground – a little relinquishment of the absolute power an employer used to have over their employees. But, as businesses try to grow out of a pandemic, it’s the employees who have the power now, and it’s up to businesses to adapt.

A hybrid model doesn’t mean less work, it means more. Even a partially distributed team means you need to understand, and nail, how distributed teams work together, properly.

Hybrid work as a middle ground implies that the two ends of the spectrum (all remote, or all in office) are somehow more difficult now. But, to be in the middle means you need an even more nuanced understanding of how work works, what offices are for, how people behave in environments you can’t control, a recognition of the blurry lines between work and life that have always been there but are now more apparent than ever, and that even more elusive value for companies – trust in your employees.

What’s emerging is that, for knowledge businesses, leaning into distributed working as the way forward requires far less time and effort, regardless of whether 40 people happen to want to work from your collaboration hub for a day or two a week.

Using design to adapt to a post-pandemic workplace

I’ve spent quite a lot of time over the pandemic years helping organisations that have adapted remarkably well to distributed ways of working and are working harder than ever to get better at it. They are experimenting daily, working across time zones and the country to figure out what good looks like for the people they employ – with all their neurodiversity and specific environmental needs. The results are happier employees and better-quality work for the business. All that’s preventing every knowledge business from doing the same is fear. Most of the time, it’s fear of ‘losing control’.

So, if you’re a leader who’s curious about how your organisation could better leverage the benefits of distributed teams and the benefits of having ‘the office’, I’m happy to spend an hour or so listening and sharing what I know.

September 2022

The problem with being problem solvers

Design has, for a very long time, been in an identity crisis. The proliferation of job titles, its mixed history with art and artists, and the mystery that surrounds the non-linear, difficult-to-codify nature of the process means that we’ve all struggled to explain what we do to others; not just to someone from outside the industry, like my mum and dad, but to those within it.

Because of the difficulty associated with capturing what Design is, it feels safer to further abstract our explanations of our job until we’re left with phrases like, “Problem Solver”. Generic and understandable.

But, the problem with the label, problem solver, is that it makes obvious a bias that we’ve all been guilty of – a designer sees the world as a bunch of problems needing a solution instead of a complex world that’s simply difficult to understand and predict.

If there’s a problem, I’ll solve it

Stuff annoys me all the time. I hate the way I need to consult the user manual of my air-conditioner unit every time I need to re-program it because it makes no intuitive sense to me at all. I hate the stack of dishes in the company kitchen that sit right in front of the sign that says “Please put your dishes away”. I hate Instagram and Twitter for holding my attention against my will. I hate that the world is broken – climate change, war, genocide etc – the list is endless.

And so the optimistic capitalist within me says, “Great, so many problems, let’s turn them into opportunities!” And so we do – we whack a “How might we” in front of each problem statement:

  • How might we allow people to program the air conditioner easily?
  • How might we ensure people keep the company kitchen tidy?
  • How might we get our attention back from Twitter and Instagram?

We follow this prescribed pattern and ‘ideate’ until we’ve reached the highest order of problems (the most complex ones): How might we fix climate change, stop war, prevent genocide?

Wait. Really?

Sure, the world is imperfect and, as we bumble our way through evolution, some problems will go away and others will take their place. There will, without a doubt, always be problems. Some will be simple ones and others will be more complex. Thank goodness we’ve got designers, nay, wait, Problem Solvers, to help us squash them as they emerge. Right?

The intent to solve vs the intent to intervene

If designers continue to inhabit the title of Problem Solver, what we end up creating is an identity and culture with a default intent to solve – to identify the problem, hone it, invent solutions to it, and take action. And sure, most of the time, the problem goes away. But, inevitably, another (or, more often than not, several others) comes along and replaces it. So, which problems do we choose to solve? Which ones can be solved?

This action-oriented mindset – taking action and changing something in our environment – in combination with biasing towards ‘simple’ problems gives us feelings of progress and achievement. It feels really good to change something. We’ve proactively applied our intellect, which manifests in, hang on a minute… Candy Crush? The air fryer? This cup printer?

Because simple problems are easier to ‘solve’, we seem to be focussing more and more on inane optimisations of already wealthy, comfortable lives, instead of using our incredible deductive and inventive capacities for something more important. Or, worse, we try to apply the formulaic processes and methods that succeed on simple problems to complex ones, and that’s where we run into trouble. But hold that thought for a moment – let’s discuss medicine.

How the health sector ‘solves’ problems

In health, we’ve already recognised and adopted a different method of ‘problem-solving’. In health, there are no solutions, only interventions. There is a culture of understanding that drugs and therapies for humans aren’t ‘solutions’ – that what may fix one thing for someone might do more harm to that individual, to someone else, or to a whole community. Because of this, we’ve developed the clinical trials system – a rigorous (not perfect) method for understanding how a health ‘solution’ may impact one or more human lives.

Clinical trials have various stages, from non-human to human, from small scale to large scale. The system tries its best to use things like double-blind testing to remove bias from the process, so that the understanding of the intervention is as ‘true’ as it can be at any given time. Again, this isn’t a perfect system – some interventions cause the need for other interventions, and so on – but it’s the ‘safest’ one we’ve got right now. It’s an acknowledgement of the perpetual tweaking and change that is baked into the culture of improving healthcare. It’s the sort of process that’s robust enough to deliver the world a vaccine in a pandemic and save millions of lives.

The language we use shapes the culture we create.

This sort of process or mindset doesn’t exist in software culture. But what if it did? What if software culture had the process of a clinical trial – one that measured the holistic impact on humans and non-humans, at different scales, over time, before a product was released en masse? What if it wasn’t just focussed on user acquisition and company growth? What if the way we thought about problems wasn’t scoped by what the shareholders are looking for next quarter? And why don’t we see protesting in the streets when a software platform like TikTok goes viral – scaling to billions of users in just a few weeks – when we seem to have a problem with a vaccine? What if software culture started to think of things not as ‘solutions’ to problems, but as interventions in them?
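
To make that thought experiment a little more concrete, here’s a rough sketch – illustrative only, with made-up stages, harm metric, and threshold – of what a staged, clinical-trial-like release might look like in code:

  # Illustrative only: expose an "intervention" to widening cohorts,
  # pausing whenever an observed harm signal crosses a threshold.
  STAGES = [0.001, 0.01, 0.05, 0.25, 1.0]  # fraction of users exposed

  def staged_rollout(expose, measure_harm, harm_threshold=0.02):
      for fraction in STAGES:
          expose(fraction)               # ship to this share of users
          harm = measure_harm(fraction)  # e.g. complaint rate, wellbeing survey
          if harm > harm_threshold:
              return f"paused at {fraction:.1%} (harm rate {harm:.3f})"
      return "fully released"

  # Hypothetical usage – in reality, harm data comes from slow observation:
  print(staged_rollout(lambda f: None, lambda f: 0.01))

The point isn’t the code – it’s that ‘scale of exposure’ and ‘watch for harm’ become explicit, reviewable decisions rather than afterthoughts.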

The intent to intervene, not solve

Providing a solution implies an end to something – once a solution exists, the problem doesn’t. Often, the list of problems we started with is so long that, after one is solved, we just pick another problem off the old list and start solving that next.

But, if we start to think of ourselves not as problem solvers, but ‘interveners’, a few things happen (well, they happen in me, anyway):

  1. I start to sound a bit annoying and arrogant, and less like a ‘hero’. What gives me, a ‘professional’ designer, the right to intervene in anyone’s life in the first place? Who asked for my crappy opinion or ‘hunch’ on something? How do I know how to intervene with tools or services in the lives of people I don’t understand? Changing one word helps me see more clearly and returns me to the human-to-human relationship that exists between designer and user: to intervene in anyone’s life, we must understand them and their community, deeply, and also receive their permission to mess about with it.
  2. It provides a level of responsibility for the unintended consequences of our interventions. When we’re building and releasing tools and services, we are the ones who decided how and why to intervene in someone’s (or a population’s) life. In any system where humans are involved, we are working with complex adaptive problems, no matter how small the change. Health already understands this. Tweak a human’s environment in one way and, sure enough, humans will use that tool or service in ways no one could imagine. By acknowledging to ourselves that what we’re doing is ‘intervening’, not solving, we’re admitting that there will always be an effect (positive and negative) caused by our intervention. Because of this, I’m likely to be a little more careful in how I propose that we intervene, and to what extent. Perhaps we’d start to change the scale at which we release our interventions, catching any harm earlier rather than after we’ve changed the brains and neurology of millions of people?
  3. There is no “done”. While it might feel good to address a problem by intervening in a person’s life in some way, the very nature of intervention is that nothing is done – in fact, by intervening to solve one problem, history shows we’re just creating more and different ones. In some sick way, the ‘solution’ mindset is keeping our industry alive and growing. We’re creating problems, not reducing them, so we need more ‘problem solvers’ to help solve them, right? Another unintended consequence I suppose? Who benefits from this mindset, then?

Quite simply, using intervention terminology over solution terminology keeps the mind open and the ears and eyes more aware of our actions, both short and long term. By recognising that what we’re doing is intervening, not solving, perhaps we’re more likely to adopt a listening-first culture – one that moves slower and fixes things, just as the health sector does.

If we’re more aware of the systems and lives in which we’re trying to intervene (because let’s be clear, most of the time, no one asked us to do that except someone who sees a way to make a profit from a community), we may approach problems with more empathy, understanding, time and consideration for the lives which we impact.

If this all sounds a bit dramatic, let me illustrate the importance with a case study.

Case study: Mosquito nets in Africa to ‘solve’ malaria

A smart cookie wants to prevent death from mosquito-borne malaria in Africa. There’s a very cost-effective and easy solution to this – mosquito nets. And so, they are deployed to those who need them most. A truly simple life-saving device. Problem gone. Or is it? Because now people are using those nets for fishing. The nets are infused with insecticide, and the holes are much smaller than those of regular fishing nets. This means not only are people poisoning their food supply, they’re also destroying the ecosystem on which they depend by pulling in fish and biodiversity that ordinarily wouldn’t be caught by ‘regular’ fishing nets. The nets are also being used for chicken coops, football goals, and wedding veils. The problem now is how to stop the problems we’ve created by setting out to solve a different one.

Could calling these nets an intervention have changed these outcomes? Might we have thought a little more broadly about the consequences of giving such a versatile tool to a human? Might we have rolled them out differently – first at a smaller scale, to learn more quickly about the positive and the inevitable unintended consequences?

So, what’s Design for?

My instinct, when I hear the unintended consequences of the mosquito-net story, is to solve it. What if they did X instead? What if the process was more like Y? That could have easily been avoided if… and so on. It takes considerable effort for me to stop. And think. Yes, this is an obvious problem to solve, but I need to take more time to understand it. What’s the cause and effect here, in this community, for these people? How do systems of food security, biodiversity, health, and education intersect or overlap? That’s a lifetime’s work, and the system will keep changing as soon as we begin to understand or interact with it.

The truth is that I, like many other designers I know, am chomping at the bit to change stuff. I also don’t know a single designer who’s actively set out to do damage to anyone. All we’re trying to do is help, we say. But, in the big scheme of things, our lives are short. And because our lives are short, we haven’t got a lot of time to make the fulfilling impact we’d like to make. Because of this, we bias toward action over consideration, toward solving rather than understanding. People pay us to solve. They don’t pay us to tell everyone to stop, reconsider, take a little longer, try something small and see. At the end of the day, we gotta eat just like everyone else, and if company X won’t ‘solve this problem’, company Y probably will. Won’t they?

I often wonder if designers could work together toward something bigger; something more… intergenerational. What if I spent my time understanding a system, and shared that understanding with another? Set someone else up for success? What if we watched and documented, together, across generations, over a much longer time horizon? What if designers helped to create a human organisational memory – a way to visualise the world and its complexity – the interconnectedness of all things? How might I intervene in my own life and community so that we can nudge us, the problem solvers, in a different direction?

What’s next in Design?

Should we have a clinical trials-like process for software products and services? Software products are fast becoming the primary tools and utilities of our time. Safety features are required in cars and other physical tools and services that intervene in our lives at scale every day; maybe it should be the same with software?

The downside of course is that regulation and systemic change of any kind takes a really long time. And, it’s often opposed vehemently until there are enough deaths or enough destruction for governments or other regulatory bodies to take action. We don’t have that sort of time. We’re also not measuring the non-death impacts of thoughtless or unconsidered software (i.e. think mental health at a micro level, democracy at a macro one).

Perhaps the simpler thing to do is to change our culture, one small behaviour change at a time. If language does truly shape culture, then the “How might we fix, solve, remove, address…” style of question – that ‘absolute’ and ‘finite’ terminology we’ve become accustomed to using, courtesy of ‘thought leaders’ like the Google Sprint and Lean Startup books – might be better phrased as, “How might we intervene in…”.

The curious thing about “How might we intervene?” is that it provokes a simpler and more important question for any problem we’re staring down – “Should we intervene here at all?” We may just find that doing nothing, in many cases, is the best intervention, and this might free up some space in our brains to discover what is likely – that there are more important fish to air-fry after all.

April 2022

How to commercialise research

There are lots of things I love about academics. The passion they have for their work is one of them. So too is their understanding that exploration for exploration’s sake is a useful and valuable human endeavour. Solving chunky problems that others have never before solved, combined with passion and exploration, is what helps to move us forward as a species – better health, better living conditions, better overall.

Academics don’t need to be founders: taking research to market doesn’t mean ending an academic career

But their passion for exploration means that something else has to give. My experience of working in early-stage research commercialisation projects has shown that academics view making money from their work as ‘impure’. Commercialising research turns ‘exploration for exploration’s sake’ into ‘exploration for profit’.

Most academics I’ve known are not motivated by money, but by status and prestige – a tenured position, more grant funding, more published papers. That’s not a bad thing. We need those motivators. But I’ve also seen how commercialising an academic’s work has scaled their impact and affected hundreds of thousands of lives for the better. It’s not difficult; in fact, the academics have already done the hard work.

A graph showing that academics need to be involved in the early stages of commercializing their research but not a lot afterwards
Commercialising research doesn’t mean an academic needs to become a founder. Surrounding oneself with the right people means they can get back to exploration and discovery (and publishing) quickly.

Does the thing I discovered solve a problem for somebody?

The thing about academic research is that curiosity leads to knowledge and, often, knowledge leads to a solution that no one asked for. In design, we work the opposite way – understand the problem, then find a solution to fit it. Neither way is better than the other; the important thing is the match – does the thing I discovered solve a problem for someone?

Problem → Solution can easily also be Solution → Problem

The most critical part of this step for academics and researchers is understanding the difference between real-world solution and lab-tested solution.

Pharmaceutical research and other regulated industries already have a defined pathway to market (trials → approvals → market), so I won’t cover that here. The type of academic discovery I’m talking about is non-pharmaceutical. And, from years of experience, I’ve seen first-hand that trialling a solution in a controlled environment (like a lab) and in an uncontrolled one yields different results.

It’s important to understand whether the product or service that’s emerged from research actually solves a problem, and for whom. It’s most important to be as specific as possible about the who → problem → solution. The who bit, in commercial language, defines the ‘market’.

Are there enough people willing to pay for their problem to go away?

If an academic’s solution tests well for a particular cohort or cohorts in the real world, the next step is really quite simple and rather scientific. It involves answering two questions:

  1. Who can benefit from this solution?
  2. How much are they willing to pay for their problem to go away?

A graph showing research methods to do a basic pricing strategy
With a little research, it’s not that difficult to understand what people will authentically pay for their problem to go away; even for entirely new product categories.

This is what designers and product people call “commercial research”, and/or homing in on product/market fit.

Every founder and/or academic thinks that their work is the bee’s knees; if they didn’t, they’d begin to question why they’ve spent (sometimes) their entire lives working on something that isn’t very good. And here’s where a healthy dose of external parties can help – to validate or invalidate the assumptions made by those who believe in the work. It’s essentially peer review – and all academics know how useful that is.

How many is enough?

The ‘enough’ question (i.e. are there enough people willing to pay enough for my solution?), in the context of commercialising research, is typically considered at two levels:

  1. Will I earn enough money to sustain the product over time?
  2. Will I earn enough money to grow and make the product better, over time?

As designers and commercially-oriented people, we tend to lean towards the second – mainly because, when things become products, competitors exist and a business wants to remain competitive. To do that, one needs enough money to change and evolve over time.
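
To make ‘enough’ concrete, here’s a hypothetical back-of-envelope (numbers invented purely for illustration): if 5,000 researchers share the problem, 4% convert, and each pays $50 a month, that’s 200 paying customers and $120,000 a year – perhaps enough to sustain a small product, but rarely enough to fund the team required to evolve it.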

What’s the optimal ‘business model’?

If it turns out that there are enough people willing to pay for their problem to go away, the next question becomes about the mechanics of how that might work. Digital products have all sorts of models – subscriptions, one-off payments, etc. – and I won’t go into that detail here, but this isn’t a difficult problem to solve with what academics do best – that’s right, more research.

The business model that a product begins with often changes and morphs as the solution is used – what’s good fuel for the child isn’t necessarily good fuel for the adult. And so finding the ‘optimal’ business model is an iterative process. It requires constant checking and validation of the assumptions made up to this point, an understanding of what competitors may be up to, and an eye on the future.

But, by this time, the research has been commercialised and starts to live a life of its own, just like any ‘normal’ business: building teams, hiring salespeople and staff, and managing revenue, profit, and expenses are all part and parcel of the transition from research to commercialised research. In my experience, this is the bit that scares academics who, primarily, want to keep being academics.

Academics don’t need to become founders

The pathway to commercialising research can seem daunting for academics who, in the end, just want to keep being academics. But academics very rarely work alone in their research. They surround themselves with professors, graduates, and other researchers who help them make better work and think through their work more deeply.

Commercialising research, just like normal academic research, works best with a team around the founding researcher. A good designer and a good product manager can be an academic’s key to unlocking the benefit of their research to the wider world, while the academic keeps doing what academics do best – using their curiosity and passion for exploration to find the next thing that will improve the way we humans be humans.

April 2022

We are but gloriously broken robots

It’s 2018, and in 34-degree heat, my partner and I are trekking through the still forests of Lastovo, the furthest island from the Croatian mainland that is still considered Croatia. We’re looking for Crkvica Sv Luka (St Luke’s Church) – the oldest church on Lastovo. I still remember the soft, golden light of that afternoon, the peace that comes from being miles from anywhere. The trees tower above us and, as we climb higher up the mountain, the treetops thin out, baring dusty rocks (and our skin) to the searing heat. We turn a corner and suddenly, there it is – the church.

Now, I can’t be sure whether what I’ve just written is factually accurate. If you ask my partner, she describes it as a very different moment. She remembers the violent thirst and hunger from setting off without eating breakfast, and following signs that aren’t in a language we can understand. She thinks I describe, with intense inaccuracy, the romance of that afternoon; I still believe it couldn’t have been any other way. But instead of being frustrated with each other about not being able to remember the factual events, we’ve come to realise that the debate itself – the differences in our perception of what happened – is indeed the gloriousness of that shared experience.

We’re what some people now call “Urban Sketchers”. We carry some basic art supplies with us, mostly pen and watercolours, and we use those tools to document what we see. Where some people rely on cameras and photos to trigger memories of places they’ve been, we decide to tell the story of our travels through our purposely skewed perceptions of the world.

Back on the mountain, struck by the church, we both pick a spot to sit. I choose a particular rock that gives me what I think is the perfect angle. She picks a different rock, but for the same reason – to her, the one she chooses is perfect. We are only a metre away from one another. We pull out our art supplies and begin, with remarkable inaccuracy, to document what we see and feel.

For about 30 minutes, we’re in our own worlds. No talking, no sharing. It is just each individual and the church. The materials and colours we choose are inherently and unavoidably biased. Accuracy of the reflected light is not the goal – a photographic image will capture that just fine. No, what we’re doing is capturing a multi-sensory experience across a span of time that far exceeds the snap of a camera shutter.

We are including and excluding things that will build individual and intense neural pathways to form deep memories of this moment. We will not recall the moment as captured by the software-skewed process of digital photography, we will recall our perception. The walk to the church, the hunger we feel, the change in light as the sun moves overhead whilst we attempt to capture elusive light and shadow as it changes from second to second.

As we finish our sketches, it feels as though we’ve spent an entire afternoon studying this one moment. In fact, the objectivity of the clock tells us it’s only been forty minutes. We emerge from our deep, individual focus and concentration for the glorious moment of looking at each other’s work. The page on which we’ve scribbled becomes a Rosetta stone – a tool we can use to more deeply understand what’s important to one another, and how each of us views and feels the exact same experience, just one metre away.

Two sketches of the same church, side by side
Above Left: My sketch. Above Right: My partner’s sketch

My partner captures a saturated golden light that, despite staring at the exact same thing just moments before, I swear I did not see. Instead, I was taken by the blushed pink wall that she swears was just not there. For her, the stone pathway leading toward the church – cast in deep and dappled purple shadow, the one we followed to arrive there – was significant. For me, the long, golden, spinifex-like grass was a much more significant feature. “What grass?” she says to me, even two years later. There was never that sort of grass there, she swears.

When we return home, we are loaded with filled sketchbooks, stories, and photographs (yes, we still take photos). What’s most interesting about the process of sharing our holidays with our loved ones is that all they seem to care about are our sketchbooks. They will literally spend hours leafing through our books – comparing one image to another. Deep and interesting conversation is sparked when they come across something I’ve decided to sketch but she hasn’t, or vice versa. “Why was this important or not important?” These conversations don’t happen with photos.

Sketching together in this way gives us both a richer, deeper understanding of one another. It’s a constant reminder that however objective we think the world is, there are literally millions of tiny biological and contextual differences between how we both experience a moment in time. It has made us more patient and more curious with one another but also with the people and strangers we interact with on a day-to-day basis. Humans are, for want of better words, broken robots – imperfect beings with imperfectly tuned sensors. And in a world that strives for scientific objectivism and perfectionism as the ultimate goal to work towards, the simple act of sketching has shown us that there is immense and overwhelming beauty in our imperfect truths. And we prefer it that way.

March 2022

Let’s innovate!

I’ve been in three different innovation teams created by large organisations, in very different sectors, and they’ve all started and ended the same way.

They start with a dream – we want to use our vast capital and resources to ‘start a start-up’, to break free of the governance structures that slow down big organisations’ decision-making, and to move quickly to improve profit, people, and planet. And they’ve all ended the same way – the market isn’t ready, or isn’t big enough, for our innovative thinking, and so the commercials don’t justify continued investment.

It’s a curious thing to witness. Three different teams. Three different organisations. Three very different types of tech. They only share one common characteristic – they all began with the technology and not the problem.

The problem with starting without a problem

Look, I get it, tech is exciting. Especially new tech. VR, AI, Web3 and blockchain – it’s all cutting-edge stuff, and it’s stuff that companies should have their eye on if they want to take advantage of it or defend themselves against possible disruption. I am all-in for exploring the possibilities of new and emerging technology to see when, how, or if it could be used to benefit the strategic goals of the business. It’s just that – in my experience – it doesn’t seem to happen that way.

In each of the three innovation teams I’ve worked within it’s been the same story (simplified to make a point):

  1. Identify new or emerging tech
  2. Deploy engineers and a ‘head of innovation’ to explore it
  3. As they explore, they imagine ways the business could benefit
  4. Keep exploring further
  5. Repeat steps 3 and 4 until the money runs out

In this model, there is always a feeling of progress, because what this model does is accelerate learning. And, when we’re learning, we feel we’re making progress. We feel as though we’re moving closer to the big imaginary lightbulb above someone’s head. We’re not sure exactly where we’ll arrive, but we feel arrival is imminent, so we keep going, because we’ll know when we get there and ‘that’s what innovation is about’ (real quote, btw).

And, in many ways, I agree – these are all good things. I’m a big advocate of play for play’s sake; of exploring without a purpose for a while, to learn things that structured learning, with its inherent boundaries, may not teach. But the problem occurs when we’re constantly engaging in divergent thinking – wider and wider – without any sense of synthesis and reflection.

Alternative approaches to tech-led innovation

So, what to do? We want to enable play and exploration, but we also want it to unlock something for the organisation or the company, at some point or at various stages along the way. This is where innovation labs could benefit from one (or both) of the following:

  1. Hypothesis-led testing and validation (aka. The Hare – Move fast and break things)
  2. Problem first, then solution second. (aka. The Tortoise – Move slow and fix things)

Hypothesis-led innovation

The methods that describe scientific exploration can be easily adapted to corporate innovation. The idea that one can postulate an outcome before beginning to explore it gives some really wide boundaries for innovation teams to play within. In other words, set a goal post in the far distance, then play and explore until we reach it. Then reflect. It’s not complicated. It’s not rocket science, it’s just science.

This isn’t reinventing the wheel; it’s just using the one that has existed for many years in academia and scientific research. It’s structure that enables play, rather than restricting it. It’s also straightforward:

  1. We believe that…
  2. To verify this, we will…
  3. And measure…
  4. We are right if…
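
To make the template concrete, here’s a hypothetical filled-in example (invented for illustration, not from a real project):

  1. We believe that warehouse pickers lose significant time searching for misplaced stock
  2. To verify this, we will shadow ten pickers over one week
  3. And measure the share of each shift spent searching
  4. We are right if searching consumes more than 20% of a shift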

So why is doing this well so difficult? What I’ve seen are four reasons:

  1. Not everyone is a scientist. Theories of change (of which hypothesis-led testing is one) fall over quickly because of people’s under-developed logic skills – things like circular logic, and a magical belief in actions or tactics driven by bias and assumptions.
  2. The change or ‘vision’ is multivariate. It’s easy to compare change when you tweak one thing against another, but tweaking two or three things at the same time muddies the experiment and opens the door to all sorts of fallacious reasoning.
  3. Ego gets in the way. Things like confirmation bias and ‘avoiding failure’ prevent humans from seeing things objectively. And, often, failure means missing KPIs or OKRs that lead to promotions and a higher sense of self-worth.
  4. There’s no peer review in corporate innovation. Peer review (i.e. inviting colleagues to critique, objectively, the methods and theories of the working group) isn’t how corporate innovation teams are set up. Crossing ‘silos’ to engage people with no context is difficult and, even if it’s possible, we end up with ‘inventor bias’ where the working team asks peers leading questions like, “You would like it if this was invented, wouldn’t you?”

Hypothesis-led innovation is a robust process – well, it’s the best we’ve got; the foundations of science are based upon it. It’s just that a commercially-oriented culture, where teams measure profit as an outcome (as opposed to an academic one, where the question is “where do I get my next grant from?”), makes it much more difficult – but not impossible – to do well.

Research-led innovation

While hypothesis-led testing is an appropriate, robust, and ‘active’ way to make progress in innovation, there is an alternative and that’s being ‘research-led’. The difference is subtle, but absolute.

In hypothesis-led innovation, teams rally around a theory and get to work quickly, often playing with tech and outputs to move towards validation or invalidation of their hypothesis. In research-led innovation the ‘act of working’ starts from observation, not action.

Research-led innovation begins with generative research that is grounded in behaviour, not attitudes. It requires an agreement within the working team that observing is valuable, and a collective trust that it will lead to a positive outcome. It requires a deep sense of curiosity and open-mindedness about those outcomes, because the team may not learn what they expect to learn but, quite critically, they always learn something. That something, even if it describes a path the team can’t follow, is valuable.

Research-led innovation requires a few things to do well:

  1. A definition of a space: this might be a type of customer or non-customer. It may be a particular activity, or a particular environment or domain. Putting a boundary around this is important for constraining the scope of observation – quite simply, it’s impossible to observe infinity.
  2. Excellent and diverse research skills: Experienced researchers with years of practice recording unbiased actions, conversations, and other human behavioural factors are critical. This sort of contextual inquiry goes beyond non-generative research, like usability testing, and lives in the realm of ‘recording the unconscious’. There are very few people who do this well.
  3. An opportunity mindset: Something most money-spending organisations have trouble with, because the ROI isn’t clear, and often isn’t for some time. Again, it requires an understanding that generative research *always* produces results, whether the funding team likes those results or not.

I’ve spent time sitting and watching people shop in supermarkets, drive trucks, operate in call centres, all without doing anything but watching and asking a few open-ended questions like, “I saw you just took that bread off the shelf, why that bread?”

The giant leap we all want to make comes from listening first, not acting.

Noticing the everyday is an art and a skill. It requires patience, curiosity, and faith – all traits that you won’t see on a job ad for innovation teams looking for ‘fast-paced, exciting, entrepreneurial’ qualities in their people. And therein lies the crux of the problem – the value of observation, practised by our First Nations Peoples for 60,000 years, is under-valued by the western, start-up mindset of acting before observing. Hence, we return to the comfort zone of hypothesis-led innovation that’s driven western science for hundreds of years.

The giant leap requires patience, not ‘action’

Innovation has a ‘brand’ – new, exciting, untethered, and exploratory. Steve Jobs in all black, secret missions to unleash world-altering technologies and systems on the world. But its brand is, in itself, its own problem. Because what innovation is really about is change. What innovation goes looking for is ‘the giant leap’ – something that’s truly game-changing for the industry or problem space within which we’re operating. It’s got confirmation bias baked in – if the leap isn’t giant, then it’s not innovation.

But, to make a giant leap, don’t we need to exert lots of energy?

And that’s where we’re going wrong, because the way to get giant leaps is, in fact, counter-intuitive. What the giant leap needs is patience: to stop, observe, listen, and understand, first. It’s not sexy, energetic, or exciting. But it’s only after this perceived ‘passive’ activity that we can act – precisely and swiftly. To go from A directly to G.

Until we begin to take a listening-first approach, those giant leaps aren’t likely to come. Instead, we’ll either end up with small, incremental change (which isn’t a bad thing, but often not the goal of innovation labs) or, the one I’ve experienced in three different teams: no change at all, and a growing distrust of the value of ‘innovation labs’.

March 2022

Growing plants and people

My dad once told me that there are two ways to grow a garden. The first way is to pick the plants you love, regardless of which conditions they thrive in, and spend the time and energy supporting their growth. If you like mangoes and you’re in Melbourne then it’s possible to grow them, they just require a lot of care, attention, and work. You need to keep them warm and insulated in the cold Melbourne winter, and you need to keep them well-watered in the dry Melbourne summer. It is possible to grow a garden like this, but boy, it’s a lot of work for the gardener and the plant.

The other way to grow a garden is to take a bunch of largely random plants and flowers, throw them in the soil, ignore them, and see what grows. Some plants love the combination of the cold Melbourne winter and dry Melbourne summer. They will not only survive, but thrive, and best of all, they require little to no work. It’s possible to have a lush, beautiful garden in either scenario.

Growing a garden the first way can be seen as a largely selfish, closed-minded pursuit. “I like mangoes, therefore I will do whatever it takes to grow a mango in Melbourne.” The result is a lot of work, and often frustration, for very little (if any) yield.

Growing a garden in the second way requires a listening approach; an understanding and recognition of the plant itself – its preferred growing conditions – and the properties of the environment in which it’s expected to grow. The idea isn’t about what I want, it’s about making a match between the plant and its environment.

People may just be plants – we need sun, water, and food – but with more complicated emotions

In a people-leadership context, I can’t help but come back to this advice from my dad about plants. It is possible for a person to survive in an environment that isn’t suited to their strengths, but they won’t thrive. It’ll take an immense amount of work (both on my part and theirs) for very little (if any) reward. One way to look at my job as a manager is that it’s about understanding the unique strengths of the people in the team, and then going about finding or shaping the environments in which they’ll thrive.

Yes, sure, plants are not people. People can change and adapt in a way that plants can’t. That’s where a conversation can change things. If a team member comes to me and says, “Look, I want to learn the skills to help me thrive in an environment that I’m not naturally suited to,” that’s great! That’s when my job is to offer the right structure and support for that person to ensure they achieve what they want to achieve – perhaps becoming more adaptable, more resilient, and more versatile. But starting from a place of listening is the important first step. Understanding strengths and biasing toward those – rather than doubling down on trying to improve weaknesses, as is the current default within a culture of individualism – is, to me, a more positive experience for the person and the planet, whether your job is to help plants or people grow.

March 2022

What I know about building a design career

My career has been a series of fortunate mistakes, and I’m not sure it could have been any other way. I don’t know everything there is to know about building a design career, but what I do know is that I love my job (even on the hard days). I’d love other designers to love their job too.

Careers, for the most part, can be accidental, but if I had my time again I’d consider the idea of designing my design career (in fact, I’m doing it for myself right now). And so, I offer this not as ‘advice’ that you can sue me over, but as literally one perspective: stuff I know to be true from my own experience and what, with hindsight, I believe has led to where I am right now – a job I love, with people I love working with.

Most career coaches will say, “Start with values.” To be honest, my 20-year-old self, fresh out of uni, had no idea what values were, let alone the self-awareness to know what mine were. That may be different nowadays – the kids today are way more self-aware than I ever was. And yes, while values play an important role in shaping one’s choices, I believe firmly in experimentation first, values later. If you’ve never tried green tea ice-cream, how do you know you’ll like it?

It’s about direction over destination

At the moment, there are two things that very experienced designers end up doing:

  1. Building great products/services
  2. Building great teams that build great products/services

In parallel to this, very experienced designers can also be thought leaders – people who reflect on and share their deep experience with building great teams or products with the aim of helping the industry, as a whole, get better.

The pathways to get to these goals aren’t mutually exclusive, especially early on in your career. And, in my experience, the way to get to number 2 is to do number 1 first. Either way it helps to have a direction.

The pathway to learning how to build great products

One of the difficult things about Design is the lack of clarity on what skills designers even need. The Stanford d.school has made a pretty good attempt, but it focuses very much on the abstract qualities of designers. Because they are abstract, they feel pretty encompassing, but they don’t really help early-stage designers whose focus is on building their Design toolkit so they can, in essence, do the work.

Right now, when I coach designers, I talk about 3 elements of the end-to-end product design process:

  1. Research – Experience in defining the problem
  2. Interaction Design – Experience in inventing solutions to the problem without the complication of graphic design
  3. Visual Design – Experience in translating the solution into colour, layout, typography etc.

These 3 are very bare bones. Other designers will say it’s missing stuff, it’s not broad enough, etc. But, for simplicity, these are the broad categories of the work that I think are important when you’re in the early stage of your career.

Finding a way into design

Most designers are drawn to Product Design from either end of the design spectrum.

There are the ones who begin their career as visual designers. They are typically strong in graphic design and often (but not always) land in creative agency environments that prize the emotional resonance that’s often so key in marketing and advertising websites and digital products.

The others who end up being drawn to design are the science-oriented folks – they find their way via discovery and research. They are often analytical and deductive thinkers. Some are also naturally lateral in their thinking.

Both starting points are valid, neither is ‘better’ than the other. They’re just starting points and they often emerge from the natural strengths of the designer. The first question to ask of any early-stage designer is “which end of the spectrum are you starting at?”

Which end are you starting at?

Once you know that (for me, it was visual design), you know what you can contribute to any job right out the gate. You also know what you’re not naturally good at, or, in other words, what you need to learn (i.e. where you may be able to grow). The most important thing is that, once you admit your strength, it’s now a much easier pitch to any future employer.

Visual Designer

I have a great eye, attention to detail, and a natural strength in using graphic design to create beautiful and useful interfaces. My strength is visual design. Here, take a look at my visual folio to prove it. What I’d like to understand more about is Research and Interaction Design. Will I get to do that in the role you’re offering?

Researcher

I’m curious and great with people. I love doing the work to understand people’s problems and presenting what I find to others so that we can solve it together. I’d like to know more about the process of taking those insights and turning them into solutions. Will I get to do that in the role you’re offering?

It’s pretty simple: Here’s what I can contribute. Here’s what I want to learn.

The next bit can’t be coached because it’s all dependent on the job you’re applying for, but there are generally two ways that teams operate, and it normally depends on how big the team is.

  1. Small teams. They offer an opportunity for generalisation (or breadth). Designers in small teams tend to work across the whole design spectrum, end-to-end, because they have to.
  2. Large teams. They offer an opportunity for specialisation (or depth). Designers in large teams tend to break down the 3 broad areas into specialised areas. There’ll be a Research department and a UI/UX department that sometimes, but not always, treats UI design as a separate function.

Specialise or Generalise?

Because of the principle of direction over destination, there’s no right answer to whether an early-stage designer should generalise or specialise. Both are useful.

By specialising, the designer will get much deeper expertise in the nuances of whichever end of the spectrum they’re starting in. That depth will always be useful in a career. By generalising, the designer gets a ‘flavour’ of the 3 broad areas of design and so this may help them discover which parts of the end-to-end process they like or dislike. That’s also OK.

Sometimes, and depending on the person, we decide whether we want to be a generalist or specialist really early on. That’s also OK. And, at the time of writing, the world needs both and values both roughly the same. There is no right or wrong, but it’s about checking in with oneself every once in a while, often with someone a bit external to the day-to-day (like a coach, mentor or someone who has more experience than you), to get a sense of which one is feeling right at any given time.

Sometimes, we think we want to specialise, and try really hard to do so, and build up a whole career and identity around being that specialist, only to start to become curious about generalising. This is also OK. We change, sometimes often, sometimes never. The important thing is that we’re being mindful of it if it happens, embracing it, and then setting ourselves up to follow this new curiosity, no matter how late in life it comes.

The time horizon is 10 years, not the next job

First and second jobs are really just a starting point, although they don’t feel like that at the time. As difficult as it is, I try to coach designers to think on a 10-year horizon – by the time I’m 30, what sort of design role do I have? And then, we work, job by job, carving a path to get there, learning and iterating along the way, just like any good product designer does on their own product. That vision may shift based on what someone learns in job one, or job two etc, and that’s OK. It’s about making a conscious choice, every step of the way.

Correcting a ‘wrong move’

No matter how much research or interviewing we do with a role, we never know the truth about the day-to-day of a job until we’re living it. We may have been told that this role we’ve just accepted will give us depth in research skills but we end up working end-to-end as a generalist. We may get a shit manager or mentor, and more often than not, the role changes as we’re working in it.

But, the thing about wrong moves is that they’re very rarely ‘wrong’. Often what we think of as wrong is that they just didn’t meet our expectations or the goals we set out to achieve. (That happens in products all the time, by the way). When it comes to designing a design career, it helps to re-examine the day-to-day work occasionally to see what we *are* getting out of it, because, even the ‘worst’ jobs are good for something.

Levelling up to become a great Individual Contributor

Over time, usually 5-10 years, people have had enough experience to be really good at something – either a really good specialist or a really good generalist. At the time of writing, with people staying in a job for an average of about 2-3 years, that normally equates to 3 or 4 jobs. Those jobs may be at the same workplace, or vastly different ones. Once again, the individual can choose, over time, to specialise or generalise in the domains they’re working in. I know some designers who are corporate enterprise specialists. I know designers who are expert designers for start-ups. Again, there’s no right or wrong.

As one gets more experienced, and the day-to-day ‘hard skills’ become more second nature, a space arrives in the brain to think about higher-order questions like: what am I finding meaningful in my work? Sometimes single domains and industries ignite a spark and a motivation to focus, or, like me, designers find that really boring and enjoy diving into completely new industries every few months.

It’s not until this point that we begin to talk about values (that thing that most coaches start with). With more experience, the cognitive load of learning the ‘hard skills’ has reduced, and designers begin to think about why they’re doing what they’re doing and the impact they’re making in the broader world.

Some designers, at this stage, commit to being a very senior ‘individual contributor’ because they love being on the tools and making stuff, and work to position themselves in domains or industries that give them fulfilment because it aligns with their values. Other designers start to see value in mentoring and coaching other, more junior, designers and pick a track that most businesses call “Design management” which is really about designing teams that build products rather than designing the products themselves.

Building great teams that build great products

In the same way that some designers love ‘being on the tools’, others love what is essentially a different type of design challenge – designing teams.

As any designer will come to know: people are complex. They all have different needs and life experiences that mingle together to form a set of behaviours. Those behaviours, given the right space, purpose, and context, can amplify things for the better. And, just like a great individual contributor wants to understand how their users will use their product and service, great design managers want to understand the designers in their care: what are their strengths and how might they be best directed in the context of a design team, and in a cross-functional one.

There are no ‘personas’, but there are roles

People who build design teams know what’s needed to make a great product: in a simplified sense, it’s depth and consideration across research, interaction design, and visual design.

And so, the design manager’s challenge is to grow an individual’s skills in a way that aligns with their personal career trajectory, whilst, at the same time, thinking about how all those individuals can work together to amplify each other’s work and, ultimately, make better products and services.

Design teams have roles – Researcher, Interaction Designer, UI Designer etc. – that need filling. But designers all come with infinite complexity around strengths, motivations, and behaviours, and they change, iteratively, as the designer themselves grows. The best design managers I know are constantly and iteratively checking in on the individuals and the relationships between them and their colleagues, to provide perspective and guidance on how to ensure everyone remains happy & well.

Transitioning from Individual Contributor to Design Manager

Sometimes, these transitions happen naturally; other times they require a distinct job or role change. But, the advice I give to anyone trying it out is that it’s roughly a 2-year experiment. In my experience, career growth and tweaking team relationships take time. It’s not something a person can switch on overnight (although it’s possible to see ‘progress’ much sooner than that). And, without oversimplifying things too much (it’s much more fluid than this in practice), these feel like a good guide:

  1. 6 months – understand the people within your stewardship as a design manager, and the environment they’re operating within. Individual strengths, career goals, ways of working, interests and curiosities – as well as things that are of no interest at all – are all important on an individual level. Understanding the team and organisation dynamics (the designer’s day-to-day environmental context) is also required to give yourself the best shot at creating happy and engaged teams (and great products as a result).
  2. 12 months – with regular 1:1s, pinups, critiques, and opportunities for small feedback cycles, you begin to see how your attention on people shifts individual and team dynamics. 12 months also typically gives a design manager a round of ‘performance reviews’ – both for their people and for themselves – and provides a formal way of understanding growth and goals.
  3. 6 months – a chance to explore a bit more depth in the ‘craft’ of people leadership and team stewardship, and see if the wins and challenges of designing people and teams give you the same feeling as shipping a cracking product that users love.

On ‘letting go of the tools’ as a design manager

Because of the almost infinitely complex and ever-changing nature of people, teams, and organisations, there is essentially infinite depth and iteration available to designers who choose the ‘manager’ track as their ‘craft’.

There’s a misconception that individual contributors reach a ‘ceiling’ in their craft earlier than a manager but, just as people are infinitely complex, technology and the way humans use and interact with it is also infinitely changing. And so, that ‘ceiling’ that people talk about with Individual Contributors isn’t one of problem depth or complexity, it’s simply money. The ‘market’ tends to value the problems that great people management and leadership solves more highly than the problems that an individual contributor solves. Good managers are also more scarce than good individual contributors and greater scarcity equals greater perceived value.

In my experience, a really senior individual contributor will always get value from picking up and understanding some of the skills of good design management – even if it’s not a forever career path. It isn’t necessarily about ‘letting go of the tools’ forever, it’s simply about broadening one’s skillset from research, interaction, and UI skills, to those things that organisations like to call soft skills.

And finally, on soft skills

In learning the craft of design – research, interaction design, and visual design – there’s a parallel track of skills that designers begin to build up from day 1 that often go unnoticed. They are used and developed by ‘doing’ the work. E.g. Presenting research to team members or other stakeholders. Taking insights from research and turning them into a product or service. Iterating through various UI design options to get to the ‘right one’. This parallel track of skills is what organisations and people tend to refer to as soft skills. Soft, because they go unnoticed and happen, often, without particular focus or structure. However, it’s exactly because we don’t cultivate them with intention, focus, and structure, that they can be quite difficult to learn.

The best thing any early-career designer can do is simply be familiar with them – giving them a label can often be just enough definition to help us pay attention to them as we focus on the more concrete processes and frameworks that often define our craft.

Empathy, or the ability to understand and share the feelings of others, is critical. Without it, we’re unable to understand how painful or joyful something is for someone else. Empathy allows us to design the most positive interaction with a product or a business.

Communication is a no-brainer and, whilst not specific to a designer, it’s what a designer does every single day. They need to communicate with users while doing research, with the team while building software, and with anyone who has an interest in the product and needs ideas conveyed clearly and concisely.

Active listening is part and parcel of being a good communicator. Asking the right questions at the right time can only come from truly concentrating on, understanding, and responding to others. It’s much harder to do well than you might think.

Self-awareness. A designer needs to know their own strengths and weaknesses, biases and preferences. Only by knowing these well can they perform effective and truthful research and devise solutions that solve problems in the way users need them to be solved. Crucially, this is often different to the way the designer or others in the team would personally like them to be solved.

Problem-solving is an obvious skill for a designer to have but, nonetheless, can be difficult to hone. Yes, there are tools and techniques to learn how to problem-solve more effectively and efficiently, but the motivation to solve a problem well is a little harder to find. On top of this, designers are pragmatic and use exceptional critical thinking. Nothing is perfect, but that doesn’t mean we can’t aim to be.

Imagination is the engine we use for coming up with new and innovative solutions to problems. The ability to create something from scratch that never existed before is unique and, to be honest, a bit magical. Designers are innately curious folk. They’re always reading, learning, watching and asking why. It’s this natural inquisitiveness that I think gives designers their great imaginations.

Lateral thinking is tightly coupled with imagination. The ability to view a problem from multiple angles, sometimes unusual ones, lays the foundation for a great creative thinker. Often, it’s the ability to borrow from different contexts and one’s own life experiences that strengthens this in a person. Whether you have that experience or not, involving other humans will always produce more ‘lateral’ results.

Story-telling is innately human. It goes to the core of what we are as a species — but to tell a good one requires practice. Designers can spin a good yarn and it’s important. Not everyone in the team will get the chance to talk to users and so it’s up to designers to convey what they hear and learn from users in a way that’s compelling. Designers need to help the entire team build the same level of empathy for their product’s users so that everyone knows the problems they’re trying to solve, and why it’s important to solve them.

Humility. Let’s face it, no one knows everything. Designers are intimately familiar with the design process and the methods and tools they use to do great work, but at the end of the day, they’re human too. They make mistakes, get tired, under-sleep and over-eat too. They might misread a user’s expression, or over-emphasise things occasionally. But, they’re also lifelong learners. They use the power of the team to reduce the risk of getting things wrong. After all, great products aren’t built by just one person and a designer is always part of a team.

Nothing is forever: constellations not paths

In my experience, designers seem persistently concerned with ‘the path’ – if I make this decision then there’s no going back. Individual Contributor or Design Manager. Visual Design or Research. Product Design or Service Design. But, what I’ve found is that design careers aren’t a set of binary decisions.

As people change, so do their interests, curiosities, and abilities. People get bored, or they discover new things. Technology and teams change and new opportunities for exploration and discovery emerge. Designers are, in general, curious folk. And Design, in general, is almost infinitely broad in its scope. As a professor of Design once said to me, Design is Everything.

And so, when I think about ways to define or design a design career, my conceptual thinking brain lands at constellations, not paths. The idea that designers can hop from one knowledge star to another, following their interests and curiosities as they go. And, as long as we remain reflective on our experiences, and self-aware enough to know what we enjoy and what we might like to do next, a design career seems to be one of lifelong pursuit.

The skills a designer accumulates in research, interaction design, visual design, people leadership, and soft skills, (at any specialisation) are applicable in almost any capacity across any human endeavour – whether that’s building physical or digital products, or addressing some of the world’s most difficult and complex systemic challenges.

But, if the ‘user’ in the story of designing our own career is ourselves, it makes perfect sense to have a peer or two to check in with – someone to give an outsider’s perspective, challenge our own biases, and help us think through our own tangle of thoughts to work out what we truly want from every new step in a career, or from the next job – and to check that the direction we set at the start of our careers is still what we want (or now don’t want).

After all, how we spend our time is how we spend our lives, so isn’t it worth approaching it all with the self-reflection, curiosity and optimism we bring to any design problem we work on, at whatever scale? To trust in the process of ‘build, measure, learn’ to iterate toward a career we’re enjoying and happy with? It’s what I’ve done and it’s worked out for me. I’m hoping that amongst all the nuances and differences between us, there’s a common thread that connects us through our practice.

More importantly, in my most recent iteration, I’ve discovered that I really enjoy helping others work it out for themselves, too. And here I am. I don’t know what’s next, but hopefully, it centres around helping others work out their path, too. Chances are I’ll learn something from their stories along the way.

Free 1:1 advice available

If there are designers out there, of any level of experience, with whom this has resonated, and who are struggling to find decent leadership in whatever role they’re in at the moment, I am always happy to chat – for free – about their design career. I can offer frameworks and tools to help designers think about their design careers so that they feel comfortable and confident in honing their craft in whatever way suits them. Sometimes having someone listen is all that’s needed. Please contact me if you’d like some time to chat.

February 2022

Is digital real?

I was recently introduced to a business that surprised me – An NFT marketplace for virtual real estate in the Metaverse. Yes, you read that right. Try reading it again and don’t feel bad if that makes no sense to you. It didn’t to me either, and I’m not here to critique whether that business is a good or bad thing. But, what’s interesting to me is that these businesses exist, and will grow, at least, in the near future, which points to a bigger, more concerning problem than buying virtual real estate – whether we’re losing touch with what’s real.

A quick, non-scientific history of markets and abstraction

As humans, I know we’ve always traded between one another. Your sharp stone for my bit of reed. I’ll give you my fish if you give me your coconut. You can take shelter in my cave if you let me use your spear tomorrow.

In small, connected communities, we depended on one another to survive in this way for a very long time. The butcher, baker, and candlestick maker all provide specialised goods and services to one another that the others can’t provide for themselves. You raise goats, I’ll grow spinach. I’ll give you some of mine and you can give me some of yours.

Living in this way, a single person didn’t have to do everything. But, over time, factors like scarcity and abundance come into play that change the value of the things that people make. If I’ve got too much spinach, or you’ve got only 1 goat left, things aren’t equal anymore. Now, my one bunch of spinach isn’t as valuable as your last remaining goat – we need something to even things out.

One of humanity’s greatest achievements is the ability to abstract things and so, to help us handle scarcity and abundance, we invented money. Money not only addresses the scarcity and abundance problem, but also the trading problem. Not everyone needs my spinach. So instead of trying to offload my spinach to a butcher so I can get meat, or a baker so I can get bread, I can give all of my spinach to someone who needs it, then use the money I get from that to buy the things I need (and vice versa).

Creating infinity

And so, for many years, we used money to help trade real things – food, water, energy, land. It worked pretty well. The ‘problem’ with these things is that they are, by their very nature, finite. We’ve done an excellent job over the years of converting energy from one form to another, and trading these things with one another, enabled by money.

But, now, there are a lot of us. Many of us don’t have enough. And there are those who have enough but want more. But with real stuff – food, energy, water, land – there’s only so much to go around, and there seems to be a growing feeling that we’re reaching limits (e.g. peak oil). We can, however, make an infinite amount (within limits imposed by global economic agreements) of our cornerstone abstraction – money. But, can we create infinite value?

What’s valuable… now?

The interesting thing about trading anything is that it has always relied on agreements. For something to be worth anything, all we need to do is agree, at scale, on its worth. Are goats more valuable than spinach? Is my vintage car more valuable than your new electric vehicle? Is a lawyer’s time more valuable than a nurse’s? That agreement works on an individual level, but also at a macro level.

Digital takes what’s real – land, water, energy – and converts it into abstractions of value that, for some reason, we seem to value more than the finite resources that are used to make it.

For example, if I’m thirsty, I’ll pay more for a bottle of water than if I’m not thirsty. But, at a macro level, how important is water to a country with plenty of it versus one that has little of it? The scale of trading influences the overall value of something, and these numbers can be (and often are) influenced by ‘market forces’, aka what we agree is valuable at any given moment, driven by scarcity and abundance.

It feels intuitively easier to value things that are genuinely finite – the food, water, energy, land thing. It feels much less intuitive to value things whose scarcity can be manufactured at the push of a button.

For example, let’s take land. There is only so much land. Our world is finite. In fact, with climate change, some say our land mass is shrinking. And so, with only so many parcels to divvy out in various sizes and locations, the value of those parcels is fairly stable. It’s a similar story with the other finite resources – food, energy, water – and they come with another unique property in that they sustain life (and, as with any trade, we all agree that it’s true).

The virtual world, however, is different. New ‘land’ in the virtual world is an abstraction of real land – an idea. It tries to use our mental model of the scarcity of the real world to manufacture scarcity in the digital one. In digital, whole new worlds can spin up and exist in an instant. And so, because digital land can’t be scarce (unless we all agree to limit it, which is unlikely), its value isn’t driven by anything but the agreement – if we agree it’s worth this, then it is, for now. But later, others may disagree with that value, in which case, it’s no longer worth what it used to be worth. And, meanwhile, new ‘digital land’ is released daily. If someone can control scarcity (or the lack of it), they can also control value. And, it’s not the poor who have the ability to spin up whole new data centres of virtual land.

But why are we trying to manufacture other domains of value outside of the finite? It seems to me that it could be for two reasons:

  1. People stand to win during moments of mutual agreement. (And people will also lose).
  2. We’re losing our ability to trade real stuff – finite stuff – because, it’s, well, finite, and we’re running out of it.

It turns out that real scarcity, in many ways, is the ultimate arbiter of how the value of something can fluctuate over time. Without real scarcity, we’re betting on agreements – an abstraction that goes, potentially, beyond money.

Scarcity and reality

The reason an NFT marketplace to trade virtual real estate feels weird is that it is without scarcity. True scarcity. The world can go on without virtual real estate. But, we can’t survive without water, energy, food, land. And, in our excitement at extracting ever more abstract value from our market, we are, at the same time, using up those finite resources to support it. But when the water, energy, food and land in the real world is gone, literally gone, there will also be no NFT marketplace for virtual real estate. Once everything real is gone, the virtual is too, no matter how much we want to convince ourselves that the digital world is real.

Remembering what’s truly valuable

In The Ministry for the Future by Kim Stanley Robinson, residents in the near future have a saying – what’s good is good for the land. And, if I have the choice to invest my finite time and energy in creating new abstractions of value so that humans can inch slightly ahead of one another, or invest it in trying to make the best use of our truly scarce resources, I’ll choose the second, every time. Because what’s good is good for the land. There’s only so much of it, and the only thing that’ll ensure it survives – so that it can support our species and the remaining biodiversity we need to continue existing – is that we all agree on its value.

The process of abstraction seems, to me, a distraction. A story we’re telling ourselves to keep the money flowing, to keep the ‘economy’ running. We’re really good at it. The risk, of course, is that we end up believing it, and drive those truly scarce resources to such low levels that nothing survives. Literally. So, perhaps what we should be working on is understanding how those truly finite resources support our continuing abstraction of value and, I don’t know, maybe don’t do that anymore? I suspect that sounds easier than it will be to do. But, what’s good is good for the land, so maybe it’s worth starting there.

January 2022

Software wants to disrupt everything but itself

Software has some powerful attributes; it’s part of the reason I love working in the industry – just one person and a computer can build and share a tool with billions of people in a matter of moments. The optimistic view of this is that it’s enabling – software can have profound positive effects on the world, almost instantly. It can empower the disempowered, improve the quality of life for people who would otherwise not have that opportunity, and it can unlock access, equality, and justice.

Of course, it could do the opposite, too.

And so we find ourselves stuck in this game. Software companies, driven by their commitment to shareholders to generate profit, release the tools at scale that help them deliver on their shareholder commitments. They ‘push us forward’. They ‘help us progress’ as a culture. They ‘disrupt the status quo’ because the old way isn’t necessarily the good way; in fact, the assumption is that it most likely isn’t the good way.

So the good software spreads. It creates opportunity as promised. It helps more people participate in the economy as promised. It drives up profits as promised. It unlocks value that, until now, was impossible to activate. It changes the culture through saturation and suddenly those affected can’t imagine a world without it.

As the culture changes, software can respond quickly. In fact, that’s the mantra – move fast and break things. Ship it. Measure the effect. Sense what’s needed next, then make that. The build-measure-learn loop that’s so prevalent in software development means that, in a matter of months, several iterations of the tool can be refined, honed and released to meet the needs and demands of the consumers who are using the tool. Often the first thing that was shipped morphs, through feedback, to solve a vastly different problem for a vastly different audience, but who’s keeping track of that when all that matters is the promise made to the shareholders – people will pay for this problem to go away, so now we’re doing that.

Have you ever seen a toddler learn to walk? That’s software. One step after another, learning as they go, gathering balance and momentum quickly. The destination is uncertain, in fact, in most cases, completely unknown, but the mantra is forward without falling. Just keep going. That is, until the parent pulls on the harness and re-directs that energy somewhere else – enter regulation.

Iterative regulation isn’t possible

Everyone knows that regulation, compared to software, is slow. But that’s because laws can’t work like software. In fact, it’s culturally agreed that they shouldn’t. Laws are deliberate, long-lasting, and need clear definitions and boundaries. It would be completely impractical to change the definition of Murder or Grand Theft Auto every few months. Good law-making requires deliberation, long-term analysis, science, political engagement. Issues of equity & justice are central to law-making which makes that deep consideration necessary – once a law is passed, people’s lives and choices change. There is no room for misinterpretation and so the language used in law does its best to articulate its meaning as clearly as possible.

But, like with most things, it’s never perfect. Ask two lawyers to interpret the same paragraph of law and both will come away with opposing viewpoints – the defendant’s and the prosecutor’s. The laws we write cannot be divorced from the cultural and political moments in which they’re written. And, similarly, their interpretation, many years later, is also influenced by the moment in which they’re interpreted.

Technological Social Responsibility over Corporate Governance

If it’s impossible for regulation to be iterative, and software, by its nature, is, then who is better placed to ensure that some of the role of regulation be woven into the process of designing, building and distributing software at scale? Maybe software companies could play a role in ensuring that issues of equity and justice, those things normally considered central to law-making, are considered alongside profits and innovation. Maybe the additional constraint will unlock even further innovation (as constraint often does) – not a world we know is possible, but one we can’t even imagine until we do it.

It’s the funny thing about software people – so quick to criticise the government, public policy, ‘slow-moving’ institutions. The culture is one of disruption; cause a ruckus then deal with the consequences a few years down the track when regulation inevitably (and often poorly) catches up. Uber, AirBnB, and now Buy Now Pay Later software are just a few examples of software’s inherent nature of disruption – the hare running ahead of the tortoise.

If I looked at regulation like a software person I’d see an opportunity. Not an opportunity to ‘get around regulation’ as is the default thinking I’ve experienced in the industry, but as a possibility of creating a fairer, more ethical, more just world more quickly. And, to be clear, I’m not advocating for corporate governance, but rather Technological Social Responsibility (TSR) – a consideration that those who can provide tools to billions of people overnight not only should consider their longer-term socio-political implications, but must.

What needs to change in software?

Like addressing any deep cultural assumption – e.g. women shouldn’t vote, gay people can’t get married, segregation – the idea within software cultures that software companies could do a better job at uncovering and planning for longer-term detrimental effects of their software is, at the moment, a radical idea.

A drawing of The Overton Window
The Overton Window applies in software cultures, too.

Whenever I float the idea of TSR to software companies it’s met with immediate, knee-jerk responses like, “That’s the role of regulation!” Even though we all know, implicitly, that what that response really means is, “We want unfettered access to scale and impact because we see a business opportunity. And, if our hunch is right, we’ll be well and truly scaled and profitable before regulation has time to catch up. By that time, we’ll have a significant amount of power and money that will help us shape or defend ourselves against it when the time comes.” It seems that software companies want to disrupt everything but themselves.

But, this is the thing I don’t understand about software people: we are so good at thinking about complex domains and systems. Engineers are literally trained to think through risky scenarios and draw out all the things that could go wrong in their code. We think through edge cases and what-ifs all the time. The only difference is scope – we’re just not applying this incredible analytical and logical skill to the broader cultural implications of giving tools to millions of people in the space of days; instead we’re checking it against the effect on revenue, monthly active users, retention, acquisition, and engagement.

It’s not as if tools to help us think through these complex implications don’t exist. Regulation has a version of them already and whilst they aren’t perfect, and they probably take longer than we would like, they’re better than nothing. They also present an opportunity for incredible software thinkers to ‘disrupt’ that and design or uncover ways to make that process better.

And, as the world begins to realise the ecological impact of software at scale, new and interesting tools are emerging all the time to help us think more long-term, beyond tomorrow, at an intergenerational and global ecological level.

So, if it’s not for lack of access or availability to these tools, it’s something else. But, like with most things, the barriers to change are based on a deeply held, almost invisible assumption – that we can’t have social responsibility and profit, scale, & power. We also used to think that staying at strangers’ houses, getting a ride from someone you’ve never met, or transferring electronic money from person to person were things we couldn’t have – but here we are.

Reframing the idea of legacy software

Unlike landfill, the legacy that software leaves behind doesn’t visibly pile up (although there is the ecological cost of the raw materials required to live in this digital age). Software changes the very thing we cannot always see immediately and struggle to anticipate unless we put a concerted effort towards it – the relationships between things: humans and things, humans and humans, humans and animals, humans and our planet. When those relationships shift, it’s far easier to label it with an abstract phrase like ‘the evolving cultural landscape’ and do what software and toolmakers have always done – create something new to deal with the problem that the first thing created. The cycle only speeds up – the more tools, the more unanticipated problems, the more tools we need to handle them.

Software leaves behind the things we struggle the most to measure – the relationships between us

Design, as a practice, seems uniquely placed to help engineers, product managers, and businesses visualise and understand the relationships between things. It’s something we take for granted – a skill that seems so obvious that we struggle to understand why others aren’t doing it (just like ‘listening to the customer’ was about 10 years ago). As Paul Rand so famously said – Design, in its truest form, is nothing but understanding relationships.

And so what if legacy software was thought about like a city or a public garden? Infrastructure with a 20-30 year impact – or even longer? What if it was normal for the planners & builders of software tools and systems to know that they’re working on something that they may not live to see others enjoy but do it anyway? Stuff that doesn’t just improve their lives or the lives of those with access to the internet, but for generations that don’t yet exist?

Sure, the technology that we build these systems on will change, evolve, and unlock new opportunities but perhaps the relationships that software creates between humans and the planet could persist or evolve with more intention?

Yes, there are positives and negatives to any tool that’s provided to the masses; any technology that’s created to solve one problem often creates another. But, perhaps if we were more concerned with what us software makers leave behind in the long term, rather than the short-term thinking that’s so pervasive in software-building cultures, we’d start to shift the needle for how software thinkers begin to plan the way they disrupt or change the culture. To borrow a little thinking from Regulators, those slow-moving curmudgeonly lawmakers, we may find ourselves iterating our way to a fairer, more equitable world and leaving fewer people behind as we go, even as the culture evolves.

October 2021

The deepening of intergenerational digital and social exclusion

Almost every 30-something person I know has a similar story – a moment where their parents or grandparents have tried to achieve something ‘simple’ online (renew a licence, download a government app, order a taxi) only to have failed miserably, leaving everyone, especially that parent or grandparent, incredibly frustrated. From this point, it’s not a difficult path to statements like, “I’m a tech luddite” or “I’m terrible at technology”. From there, the easiest path is the one of least resistance – to opt out of technology.

It’s not technology’s fault, it’s ours

I don’t know a single software designer, who, at some point in their career, hasn’t done the following:

A new feature is being designed. It’s, in the scheme of the project, a relatively minor one for an application that already exists. The risk of getting it wrong *feels* low, and there’s a huge backlog of work coming up on the horizon that the team is panicking about. It would take 5 days of work to design, test, re-design this particular feature with the users of the application to make sure we get it right. It would take 2 hours to do a quick scan of the world’s biggest software companies (Google, Facebook, Instagram, Netflix) to see how they solve the problem and replicate that in as much as it makes sense to do so. Sure enough, we pick the second option – just this once.

The decision to trade off time-to-release against usability feels, in the moment, like a pretty low-impact one. We use everything in our designer-y brain to justify that decision:

  • The team is under a lot of time pressure
  • It’s just a small feature
  • Chances are, our users are people who also use Google, Facebook, Netflix etc, so we’ll leverage that familiarity to de-risk it
  • Our small team doesn’t have the budget to test everything but we know Google, Facebook, Netflix etc do heaps of testing, so we can trust that by proxy

And sure, on paper, this seems pretty reasonable. Maybe even off paper: when the feature is shipped, the customer service team isn’t inundated with support requests, so maybe it worked? Maybe if it worked once, it can work again – the next time those factors of time, size of feature, and risk of getting something wrong are true? What if we did it again, and again, and again? Each designer, in each different team. What happens in a year, or two years, or three years down the track?

The slow proliferation of unusability (and language)

Knowledge of how anything works is cumulative. If the first strawberry we ever taste is sweet, we’ll assume all strawberries are sweet until we taste one that isn’t. After that, we realise that some strawberries are sweet, and some are not. Once we get on a train of knowledge, we build it slowly over time. The same goes for understanding how to interact with digital interfaces.

Example: Buttons

Buttons, in the real world, look something like this:

A close up of vintage car radio buttons
A close up of vintage car radio buttons
Top: The original radio buttons via UX Planet | Bottom: A more subtle and ‘modern’ take on buttons via 7428

The physical characteristics of a push button can be described like this:

  1. They protrude from the surface.
  2. Some are concave or convex to communicate, albeit subtly, that your finger belongs there – that you are required to push.

Buttons were inherently physical objects. So, when digital interfaces came along, designers used the familiarity with real-world objects to teach people the function of these graphical elements. They started off looking a bit like this:

Digital buttons that have convex and concave shapes so they look like buttons that can be pressed even though they're digital
Skeuomorphic buttons via Jon Kantner

Then, over time, the “medium” of digital evolved. Partly through fashion and a need to differentiate in the market, partly through a requirement to ship more quickly, skeuomorphism started to seem ‘dated’, and big companies like Apple and Google (the ones we rely on as a proxy for ‘good, tested design’) decided that we would enter an era of an increasingly minimal aesthetic, which ended up as flat design.

Soon enough, led by the large companies, ‘buttons’ started to look like this:

An example of two 'buttons' that are actually flat rounded shapes
The evolution of buttons took on a non-skeuomorphic look which no longer gives clear affordance via Dribbble

And of course, our language didn’t change – we kept calling them ‘buttons’, but their physical characteristics – their affordances – slowly evolved. And, unless you evolved with them, these squircles and circles above don’t look anything like a ‘button’ anymore.

The problem is, we the designers are evolving with and making up our own affordances and forgetting that ‘regular’ people are not.

When I tell my dad to press ‘the upload button’, he doesn’t see it as a button. It’s just a blue shape with rounded edges or a blue circle with an up arrow in it. I can’t use the word ‘button’ when I’m coaching him over the phone through an interface he doesn’t understand because, well, these objects look nothing like actual buttons that still exist in the real world. And I haven’t even described the issue we’ve got with concepts like “Upload” and “The Cloud” here.
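To make that concrete in code: here’s a minimal sketch – React/TypeScript, with hypothetical styles I’ve made up for illustration, not lifted from any real product – of the same semantic button rendered two ways. The markup and behaviour are identical; only the visual affordance differs, and with it, who recognises the thing as pressable.

    // A minimal sketch (hypothetical styles) of one semantic <button>,
    // rendered with two generations of visual affordance.
    import React from "react";

    // Skeuomorphic treatment: gradient, border, and shadow mimic a raised,
    // pressable physical surface - the affordance my dad still recognises.
    const skeuomorphic: React.CSSProperties = {
      background: "linear-gradient(#fdfdfd, #c9c9c9)",
      border: "1px solid #8a8a8a",
      borderRadius: 6,
      boxShadow: "0 2px 3px rgba(0, 0, 0, 0.35), inset 0 1px 0 #ffffff",
      padding: "8px 20px",
    };

    // Flat treatment: a blue shape with rounded edges - nothing about it
    // says 'push me' unless you've already learned this visual language.
    const flat: React.CSSProperties = {
      background: "#1a73e8",
      border: "none",
      borderRadius: 20,
      color: "#ffffff",
      padding: "8px 20px",
    };

    export function UploadButtons() {
      // Identical semantics and behaviour; only the affordance differs.
      return (
        <>
          <button style={skeuomorphic} onClick={() => alert("Uploading...")}>
            Upload
          </button>
          <button style={flat} onClick={() => alert("Uploading...")}>
            Upload
          </button>
        </>
      );
    }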

Our visual language has evolved, our language hasn’t

The evolution of the visual signals we’re using to denote functionality in the digital world isn’t just constrained to buttons. It’s happening everywhere.

Take ‘tabs’ for example – the ones we use in internet browsers. The word ‘tabs’ comes from the appropriation of the physical tabs we use to separate sections of a document, like this:

A photo of dividing tabs in a physical folder
The ‘original’ tabs haven’t changed their function over many generations. Image via Printwise

So, in the early days of UI design, designers quite rightly took that metaphor, guided by the affordances its shape gave us, and made the early internet browser:

A screenshot of notched tabs in an early desktop internet browser
The same notched shape of ‘tabs’ ported over to the digital world with ease, and persists today. Image via Printwise

This metaphor has persisted surprisingly skeuomorphically over the years. In most desktop internet browsers, the use of tabs is still the ‘norm’, and we still say things like, “I have too many tabs open.”

What’s interesting though, is what we see on mobile. This screenshot is of the latest Chrome browser (yes, another Google product, the ones we rely on for direction).

I can see the call-to-action (which also looks nothing like a button), “New Tab”, in the top-left corner of the screen. But, I don’t see anything here that looks anything like a tab. Maybe they’re windows? Squares? Perhaps they’re buttons, now?

This would be fine if you’ve used a desktop internet browser before, but what if you haven’t? What if you’re in the increasing number of people on the planet who have only ever used a mobile device? What does “tab” mean to you now?

Once you start looking for it, it’s everywhere. And, you might think, “what’s the big deal? People will learn it once they use it and then they can evolve with it, just like the rest of us?” Well, to them I say this:

Imagine if you felt like a banana and you asked the grocer to give you one. Instead, they gave you an orange. And then you said, “that’s not a banana, I want a banana”. And then they gave you an apple, instead. In today’s world, we’d give the grocer a 1-star review and move on. Eventually, they’d go out of business and we’d say, “Well, that’s good, they didn’t know what they were talking about, anyway.”

This isn’t about anti-evolution, it’s about exclusion.

I’m not saying our craft shouldn’t evolve – that we should continue to replicate the physical world, or be limited by what physical manufacturing is capable of in any particular decade. But, what we’ve done, slowly and steadily, drip-by-drip, is make it very difficult for anyone who isn’t us to use and access services that are vital for their wellbeing and existence in society.

UI isn’t the only problem with software – it’s also how we make it and model it.

Here are just a few datapoints from Digital Inclusion Index Australia:

  • 1.25 million Australian households without internet access at home in 2016-17 (14%)
  • 79.1% of people educated to year 12 or below use the internet, compared with 96.7% of people with a tertiary qualification
  • More than two thirds of people who are homeless had difficulty paying their mobile phone bill in the last 12 months

This means it’s also about how performant we can make our services so that those with limited bandwidth have access. And it goes beyond ‘AA accessibility’ tickboxing, because not recognising a button as a button, or a tab as a tab, isn’t in the WCAG guidelines – it’s about human-ness.

More than 1.3 million Australians with disability did not access the internet in the last 3 months. One quarter of these people report lack of confidence and knowledge as a reason for not accessing the internet.
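That ‘not recognising a button as a button’ point is worth making concrete. Here’s a minimal sketch – again React/TypeScript, with a hypothetical ‘pill’ class standing in for whatever flat styling a team ships – of why ‘button-ness’ lives in the semantics, not the styling, and why a tickbox audit can miss it:

    // A minimal sketch (hypothetical markup) of why 'button-ness' is more
    // than a visual style or a tickbox.
    import React from "react";

    // This div can be styled to look identical to a flat 'button' and may
    // even pass an automated colour-contrast check - but it announces itself
    // to a screen reader as nothing in particular, and a keyboard can't
    // reach or activate it.
    export const FakeButton = () => (
      <div className="pill" onClick={() => alert("Uploading...")}>
        Upload
      </div>
    );

    // The real element carries the semantics for free: focusable,
    // keyboard-operable, and announced as a button - even when its visual
    // affordance has evolved away from anything button-like.
    export const RealButton = () => (
      <button className="pill" onClick={() => alert("Uploading...")}>
        Upload
      </button>
    );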

It won’t happen to me

It’s easy to think that because we work in digital and we’re evolving with the patterns, we’ll be OK. That once we’re on the train, we’ll stay there. It’s our job, after all, to understand what’s emerging in the evolution of our visual language online – how we implicitly communicate form and function to our users as well as how things actually work. I’ve often remarked to people that working in software is my insurance policy against being excluded as I age. But, now at 37, I can already feel it happening.

It happens slowly, and that’s the problem – before you know it, you’re alone.

I have no idea what tools ‘the kids’ are using these days (Gen Z and younger). I mean, I know TikTok exists (our next big ‘global app that everyone uses’), but I don’t use it. But, because ‘everyone’ does, TikTok’s designers have incredible influence and power in establishing interaction design patterns that users will learn and future products will leverage as they evolve their own language. If I miss the TikTok train, how much further do I slip behind? As the saying goes, first we make the tools, then the tools make us.

It turns out that Gen Y and X are the first two generations in the history of humanity to use decidedly different tools to those of the generations before. It used to be that our parents and grandparents taught us how to do important things in life; now it’s the reverse – we spend hours on the phone helping them link their COVID-19 Vaccination Certificate to their ServicesNSW App so they can access important services that are vital to their ongoing wellbeing and inclusion in the community.

We have to fix this before we get there

It’s easy to criticise, but maybe that’s what’s needed here. As a set of professionals who have immense power to shape the way humans and computers (and humans and humans) interact with one another, by all accounts, we’re doing a pretty shit job. Those tiny decisions we make every day have cumulative effects that are creating a more divided and unequal society by the day.

We could keep going this way. We could slowly but surely be the ones who, towards the end of their lives, can no longer access important services or interact with businesses like cafes, cinemas, and restaurants because we can’t figure out how to order food or movies anymore. Maybe we’re OK with that? But also, maybe we’re not.

What needs to change? It’s easy to be overwhelmed by everything. To think that one person can’t make a difference. But, if nothing different happens, then nothing will change, so here are a few simple suggestions that I’ve applied to my own life. I offer them to others:

Prioritise research and testing in our day-to-day work.

Maybe it’s not OK anymore to ship that small innocuous feature without doing a little bit of testing on it first. Maybe it’s no longer OK to rely on what Google, Netflix, Facebook, or TikTok are doing to achieve a similar function to the one you’re trying to design. It’s easier than ever to get access to millions of people, around the world, to confirm or confront your internal biases in the design solutions you put forward within your team.

Recognise that there is no ‘everyone’.

“Everyone” does not use Google. “Everyone” does not use TikTok. “Everyone” does not use Netflix. We can’t assume that just because Google designs something a particular way we can lift it and the users we design for will automatically ‘get it’. I’m not saying that we can’t use familiarity at all; we should just make sure that it works for who we’re designing for in our day-to-day as well. Maybe your users don’t use Google, TikTok, or Netflix.

Call out who we’re excluding.

Because every business is different, every ‘market’ is different. It’s easy to say, “Our target market is 35-44-year-old women with children.” That gives focus to the team, sure. But also calling out who gets left out in this scenario at least surfaces who you’re leaving behind. Are we talking inner-city women with children who have access to good bandwidth and widespread mobile coverage? Yes? No? Being clear on the boundaries of who you’re designing for means you’ll have clear boundaries of who you’re excluding – then we can ask the next most important question, Is that OK? Who might we harm?

Don’t blame parents (grandparents or carers), blame designers.

Our parents and grandparents taught us how to interact with the world, so maybe now it’s time for us to help them do the same until Design is fixed. Recognising that it’s not the older generations’ lack of ability, or their fault, that ‘they don’t get technology’ is the first step. It’s not that at all; it’s just that for 2+ generations, we’ve left them out of the conversation about how they understand the world and how they want us to shape it for them. Maybe, it’s time we start listening. For their sake, and for ours, because it won’t be long until we’re them, calling our loved ones to ask how on Earth we can watch a movie we want to watch these days.

It’s not too late

I have faith that we can turn this around. The designers I know don’t intentionally set out to do the wrong thing, but as E.F. Schumacher states so profoundly in his 1973 book, “Small Is Beautiful: Economics as if People Mattered”:

Unintentional Neocolonialism results from the mere drift of things, supported by the best intentions.

And so, we have to stop this drift. This isn’t just about computers, the internet or even technology. It is about using technology as a channel to improve skills, to enhance quality of life, to drive education and to promote economic wellbeing across all elements of society. To be digitally inclusive is really about social inclusion. The opposite is true, too.

September 2021

The misleading connection between art and design

A variation of this article was first posted on cogent.co on May 1, 2019.

There is very little connection between art and human-centred design. I’ve spent my life trying to prove to people that having good visual communication skills and deep critical thinking aren’t mutually exclusive (in fact, if anything they reinforce one another). And, even though I’ve been trying to convince people of this for almost 20 years, I feel like I haven’t made much progress. On the internet, I still have to separate my career as a picture book author/illustrator from my ‘professional design career’. If I don’t, I’m pigeon-holed as a ‘visual’ person or, even worse, ‘a creative’ – that word that people associate with ‘magic.’

Designers don’t make magic

I’ll be honest, being ‘a creative’ sounds pretty cool. I used to think I was one too. It’s sort of arty. It implies a resistance against the status quo. It’s an easy way to stand out from the crowd and make oneself feel pretty special. Like you’re one of the chosen ones with a natural talent for making things look great, breaking conventions while doing it. More importantly, calling oneself ‘a creative’ bestows an enigmatic quality. It’s really hard for people who aren’t ‘creatives’ to understand what we do or how we do it. This “Wizard of Oz” effect makes us feel great.

But, the reality is that designers aren’t special. We’re not enigmas. Our beards, or flannel shirts, or dark-rimmed glasses and monochrome wardrobe might convince people who don’t know otherwise but largely, that’s been unhelpful. Designers, well, good ones anyway, use logic, deduction, collaboration, and scientific methods to produce great design work. That’s it. It may feel like magic when we get it right, but that’s only because critical thinking is a designer’s superpower and not everyone has that.

How did design and art get so confused?

To understand this, we need to have a quick look at history and how design, as a practice, evolved. Before computers, there were these people called ‘commercial artists’. They typically had fine arts educations and so you (and anyone else) could call them artists, but even that’s a stretch. They used art materials like pastels, watercolour, charcoal, oil and acrylic paint, and they went to art school, but even then, they didn’t create “art”. Still, for argument’s sake, let’s consider them the closest designers have ever come to being artists. Anyway, companies would pay them to create images (not art) to advertise their businesses.

Now, fast forward through time. Technology, as it always does and always will, changed the way commercial artists did their work. They went a couple of ways. Some picked up a camera and became photographers as cameras became a cheap and easy way to make images. They walked around and shot images that were used in company ads. Technically, it was still image-making, so it made sense. They’d send their photos off to the ‘art department’, where they’d be combined with a bunch of other things to make the ‘final artwork’. The other half said, “Hey, the photography thing is pretty boring, realism is overrated, I actually like composing the thing that goes to print. I like things like fonts, and colour, and layout and shape and yeah, sure, I’ll include a photo or two as well when it gets supplied.” Sound familiar? Well, that thing became Graphic Design.

And so the discipline of Graphic Design continued to evolve, and no one bothered to rename the “Art Department” even though, by now, no one was creating art. It started off with manual techniques, like cutting paper into various shapes and positioning things on a page to compose the final piece (see Paul Rand). And then computers came along. It wasn’t long before a graphic designer didn’t have to draw all those letters anymore; they had digital typefaces (thanks Emigre) to play with and sometimes destroy (like David Carson did). A bit after that, design software evolved to the point where they no longer had to cut and arrange pieces of paper manually at all; the whole thing was produced on a computer. Still, no one bothered to change what they called the final product. It was easy to keep calling it “Artwork”, “Final Art” etc. As far as designers were concerned, they were still ‘making art’.

And then, the internet came.

Remember when the internet arrived? Those glorious days when web “pages” started to be a thing? Well, it was really new and no one knew what the hell they were doing but graphic designers were probably best-positioned to start trying to ‘help make the internet a more attractive place’. And largely, they did.

In the early internet years, the skills graphic designers had acquired in their careers were largely transferable to this 2D digital environment. Sure, there were button clicks and ‘interactivity’ to deal with, but the general theory of visual perception still held true. Visual hierarchy, contrast and the Gestalt principles of perception served us well.

But, to make software easy for graphic designers to understand, companies like Adobe kept using “art” metaphors like slicing and cropping and so on. The output of the process was still called “Artwork”. That’s right: artwork, 60 years on. Graphic designers spent years making ‘online brochures’ and slight variations thereof, and here we were calling it “Art”.

The limits of my language are the limits of my world – Wittgenstein

Now, most recently, the internet has shifted from being a bunch of brochures to become pretty damn useful. We’re using it for way more than ‘brochure’ sites. We’re building tools and services. People need these services to function in society now. Banking is just one example. But, here’s where the real problem lies. Because of the events I’ve just described, graphic designers got their hands on the internet first. Because they were there first, they evolved “with” the internet. They learned, through trial and error, what it meant to specify clear interaction paradigms that they thought people knew how to use. I was one of those people. And you know what? We’re still here.

But, if we were to be honest with ourselves for a minute, most of us know we’re not artists now; at least we’ve come that far. We call ourselves “Interaction Designers” or “UXers” or whatever term we keep making up to tell ourselves we’re more interesting, serious, and you know, people should pay us more.

But a job title isn’t enough. No matter what designers end up calling themselves, there is still an intrinsic link in the non-designer’s mind between art and design. Labels are great, but they aren’t nuanced enough. By helping non-designers understand what we actually do, day-to-day, I hope I can continue my somewhat hopeless cause – to separate art and design.

What does a designer actually do?

The problem with being part of a group with good visual-communication skills is that, often, we struggle with words. We have a rich visual vocabulary but, sometimes, don’t spend enough time writing down and explaining all that we do – the stuff that’s intuitive to us is often the most difficult to explain. So, here, I offer a model for how I think about Design.

Research

Research is an important part of the design process and underpins the quality of decision-making throughout the life of the business and product. The activities that a designer might engage in during this phase are:

  • Identifying questions that the team needs to answer
  • Categorising the demographics of users who will likely use your product
  • Constructing and planning activities like interviews, focus groups & product testing to answer key questions and solve key business problems
  • Analysing and synthesising the outcomes of user and business research
  • Presenting findings back to the team and various business stakeholders clearly and concisely
  • Providing recommendations and guidance on what, when and how to proceed with the business and/or product.

To do effective research, a designer needs to pick the right activities to elicit answers in the deepest and most truthful way. They need to make sure that those activities are completed without subconsciously influencing how a user responds. It’s a very specialised skill — so much so that some designers have been trained in scientific disciplines like Behavioural Psychology and Clinical Research Methods to do this step effectively.

Interaction Design

Interaction Design (IxD) is the process of inventing and describing how users will interact with your product. It’s sometimes called “Human Factors”. The decisions and recommendations made in this phase come from combining two things:

  1. A deep understanding of your users’ needs, business goals and technical environment. This includes:
    • Demographics of users, e.g. age, gender, social status
    • Environmental factors: Are they on the train, in the car, at their desk? Do they have good internet connectivity?
    • Technology: Are they using a mobile phone? Tablet? Desktop? All of these? Do they use Android, Mac, Windows, iOS?
    • Situational context: What time of day are they using your product? What did they do just before or after using it?
  2. A broad knowledge of all the ways in which current technology can be used to help your users and business achieve their goals. This includes knowledge of different platforms (iOS/Android, Windows/Mac), different interface elements (buttons, forms, fields, wizards, pages etc.), and different technologies, as well as the appropriate time and place where these things can be used to achieve a desired outcome.

Designers will typically communicate their thoughts on how they plan for users to interact with your product using a mixture of online and offline tools. Sketching screens with pen and paper can be used to communicate and iterate ideas quickly. Testing ideas with users can be done with sketches, or to simulate a more ‘finished’ product, a designer might create an online interactive prototype of your product before it gets built.

Visual Design

Visual Design (or User Interface (UI) design) is when a designer works with the elements of Graphic Design to produce a representation of the final thing. It’s about what the user sees.

It draws on traditional graphic design principles like colour, shape, typography and layout that work together to produce a screen or series of screens that help the user achieve their goals.

Visual design can help or hinder how easy your product is to use. Things that you want a user to tap or touch, click or type into need to look a certain way so that users understand how to use them intuitively.

The visual design also plays a strong role in creating emotion in the user. How a user feels can be critical to how they interact with and react to your product, and to the impression your business leaves on them. For example, if something is hard to use, they might perceive your business as difficult to deal with too.

What else does a designer do?

All three specialties that currently comprise the design process are equally important to the success of a product and, ultimately, the success of the business. You could do wonderful research and create slick interactions, but without a well-considered visual component, the user might be left feeling uninspired or confused. Conversely, a gorgeous interface full of beautiful images and a well-executed logo won’t necessarily help a user achieve their goals. Without research, the interface may well have the wrong content or navigation.

Every project is different, and so the depth and attention a product needs across all three areas will vary. The most important thing, at minimum, is to make sure all three are considered in a lightweight way rather than dropping one or two of them completely.

Important ‘soft skills’ of designers

There’s also a bunch of other things, called ‘soft skills’, that make a designer a great one. Here they are:

Empathy, or the ability to understand and share the feelings of others is critical. Without this, we’re unable to understand how painful or joyful something is for someone else. Empathy allows us to design the most positive interaction with a product or a business.

Communication is a no-brainer and, whilst not specific to designers, it’s what a designer does every single day. They need to communicate with users while doing research, with the team while building software, and with anyone who has an interest in the product and needs ideas conveyed clearly and concisely.

Active listening is part and parcel of being a good communicator. Asking the right questions at the right time can only come from truly concentrating, understanding and responding to others. It’s much harder to do well than you might think.

Self-awareness. A designer needs to know their own strengths and weaknesses, biases and preferences. Only by knowing these well can they perform effective and truthful research and devise solutions that solve problems in the way users need them to be solved. Crucially, this is often different to the way the designer or others in the team would personally like them to be solved.

Problem-solving is an obvious skill for a designer to have but, nonetheless, it can be difficult to hone. Yes, there are tools and techniques for learning to problem-solve more effectively and efficiently, but the motivation to solve a problem *well* is a little harder to find. On top of this, designers are pragmatic and exceptional critical thinkers. Nothing is perfect, but that doesn’t mean we can’t aim to be.

Imagination is the engine we use for coming up with new and innovative solutions to problems. The ability to create something from scratch that never existed before is unique and, let’s be honest, a bit magical. Designers are innately curious folk. They’re always reading, learning, watching and asking why. It’s this natural inquisitiveness that gives designers their great imaginations.

Lateral thinking is tightly coupled with imagination. The ability to view a problem from multiple angles, sometimes unusual ones, is what lays the foundation for a great creative thinker. Often, it’s the ability to borrow from different contexts and one’s own life experiences that strengthens this in a person. But whether you have that experience or not, involving other humans will always produce more ‘lateral’ results.

Story-telling is innately human. It goes to the core of what we are as a species – but telling a good story requires practice. Designers can spin a good yarn, and it’s important. Not everyone in the team will get the chance to talk to users, so it’s up to designers to convey what they hear and learn from users in a way that’s compelling. Designers need to help the entire team build the same level of empathy for their product’s users so that everyone knows the problems they’re trying to solve, and why it’s important to solve them.

Humility. Let’s face it, no one knows everything. Designers are intimately familiar with the design process and the methods and tools they use to do great work but, at the end of the day, they’re human too. They make mistakes, get tired, under-sleep and over-eat too. They might misread a user’s expression, or over-emphasise things occasionally. But they’re also lifelong learners. They use the power of the team to reduce the risk of getting things wrong. After all, great products aren’t built by just one person, and a designer is always part of a team.

Where to from here?

Maybe some people will use this as a hiring guide. Maybe designers will use it to improve the way they talk about the value they add to a team. Maybe, and, most likely, I’ll continue to separate the way I talk about my skills online for fear of being under-valued. Or maybe now, I won’t.

September 2021

No one expects me to lift a piano at work

I’m 67kg, slight build. If there’s something heavy that needs lifting, I’m not the first person someone asks. In fact, if I was asked, I’d probably decline, knowing it’s likely I’d do my back or damage something in the process. The last thing a workplace needs is an OH&S issue with an employee, right?

But when it comes to mental health, we don’t have those visual signals that communicate we might be better suited to certain types or ways of work than others. No one can look at me from across the room and see whether I’m capable of managing many high-pressure projects at once, or of successfully switching every 2 hours to a different problem with a different group of people. No one can easily see that I work best on a maker’s schedule (in focussed half-day increments) and not on the manager’s schedule of something new every hour. Some find that type of work exciting and interesting – they thrive. Others don’t. And now that we’re working in distributed teams, no one can look across the room at all because we’re not co-located anymore. How someone’s coping with the way a team is working, or with how a leader is structuring their day-to-day, is more invisible than ever.

A more nuanced conversation about mental health

Anyone who’s done any work on their own mental health knows that good mental health comes down to two things: self-awareness and being able to talk about it. They’re probably two of the most difficult things for any human to do because, from a very early age, we’re encouraged to do the exact opposite – bottle it all up.

Managers of my past – not all, but certainly most – have excelled at telling me what to do and how to do it. And, with so much time spent telling, they’ve been terrible at listening. They’ve assumed, very wrongly, that whatever they were capable of doing, I would be too; that whatever motivates them motivates me. My managers may have been motivated by growing a business or team, revenue numbers and ‘headcount’, but they never took the time to understand how I was different. I looked like them, and if they could do it, couldn’t anyone? If I was struggling, I’d just have to try harder.

We don’t assume that when we’re lifting a piano, though.

Listen first, then lean into strengths

What it comes down to is that we’re all built differently – different strengths, different weaknesses. It’s an obvious thing to say out loud, but it’s surprising how little it’s acknowledged, or how often it’s forgotten, in the day-to-day of leading people. And so, just as someone may be better suited than others to hauling a piano up some stairs, the same is true for managing multiple projects, switching contexts, or working under tight timelines. Some thrive, some barely survive.

Understanding individuals’ strengths, then building a team with complementary ones, means not everyone needs to do everything and the team ends up greater than the sum of its parts.

Personally, I find deadlines provoke anxiety, not motivation. I get more energy from a short thank you from a team member than from seeing revenue increase. I prefer small, tight teams where everyone knows each other’s name. I love ambiguity and complex problems and can sit in uncertainty and concentrate for hours to work through something chunky. Sometimes I like virtual lunches, and sometimes I don’t. I value and am motivated by generosity and reciprocity, not competition and domination. The managers who took the time to understand that saw the best parts of me and, in some cases, friendships blossomed. Managers who didn’t left us both tired & frustrated (and left me looking for another job).

Stopping for breath

It’s difficult for leaders to prioritise stepping back and reflecting with their individual team members on what’s working well and what’s not. In a business where it’s all “we needed this yesterday”, most managers and employees are struggling to play catch-up, let alone assess the possible carnage or opportunities that lie in the wake of forward momentum.

Structured conversations for reflection need to be the first step. “In the last two weeks, how have you found the work? What do you like or not like about it? What’s been giving you energy? What’s been taking it away?” It only takes an hour every 2 weeks.

Over time, these moments of reflection accumulate and begin to shape a picture of the individual – not one of their physical abilities, but of their mental and emotional ones. Once we understand the environments in which each team member thrives, leaders get to focus on their real work: shaping the environment to suit the individual. If it’s true that people are like plants, then maybe taking this approach will lead to better outcomes for everybody.

September 2021

We can’t help but put ourselves in the centre

Until Copernicus came along in the 16th century with his radical idea of heliocentrism, humans believed that the Earth was the centre of the universe; the sun and the planets revolved around us, not the other way around. That’s a pretty long time to think that, only to discover the opposite is true. And now, after flogging the ideas of Human-Centred Design for the last 15 years of my career, I’ve realised that, by George, I’ve gone and done it again. Like every other human before me, I’ve put us in the middle… sort of.

The human ego is pretty big; it’s our strength and our weakness. It seems our default is to try and make everything about us, over and over again, until a new Copernicus comes along and says, hang on a minute, maybe we’ve got it wrong? With the world literally burning up in front of our very eyes, maybe it’s time to take a hint from Copernicus and think about things differently.

How did HCD get here?

HCD (aka Human-Centred Design) was noble in its pursuit. But isn’t that always the way? No one believes they’re doing the wrong thing until 200 years after they’ve gone and displaced an entire race of people or obliterated a 60,000-year-old culture.

HCD was supposed to save humans from the corporate colonialists of our lives who hawked profit-oriented solutions to all of our problems. HCD was supposed to be the counterweight to the gravity of big business – a language and framework for teaching profit-centred businesses that, actually, there was benefit in considering the person who would use their product or service. I mean, it’s just the right thing to do, isn’t it? Not to make products for people so they’ll pay more, but to do it because, well, we’re all human in the end.

But, those businesses pushed back. They did not say, “Oh, I see your point, maybe we do have a responsibility to listen to our consumers and treat them fairly and with equality”. They didn’t say, “Yes, we should only put things into the world that benefit them, even if it means slimmer margins in the end because it’s the right thing to do.” No, instead, they asked us the question, “How can putting humans at the centre of my product and service lead to bigger profits?” And we, the Designers, answered, “Well, if we can’t convince the business owners to do something good for humans for the sake of doing something good for humans, what if we teach them the value of our skills for profit-making first. Maybe after a bit, they’ll broaden their remit to focus on humanity over profit in the long term? And if not? At least it’ll be better than what it was before, right?”

Is it better than before?

We’re now quite a way down that path. HCD has been around for a good while. We’ve gone and taught those profit-making businesses how humans work – what motivates us, what we fear, how we behave – and the net result, one could argue, has been worse for us, not better.

See, it’s because we’ve put ourselves in the centre, again. We’ve failed to recognise that our species isn’t ruler of all but, in fact, dependent on all; that we are just one part of an ecosystem upon which we depend, not for happiness or contentment, but for life.

We have fed the profit-making machines the principles of HCD with the best intentions, and those principles have been consumed, chewed up, and spat back into the world, with our consent, as ever-smaller micro-improvements to things that, in the bigger scheme, don’t really matter to anyone but ourselves. Our work has improved the convenience, usability, scale and accessibility of many products and services and, in doing so, increased the rate at which we draw precious resources from the ground and throw pollutants into the clouds. That’s the same ground and the same clouds we need if we’re to continue any sort of existence on this planet. We’ve been calling it HCD, but what it’s really been is Consumer-Centred Design (CCD). A truly human-centred approach isn’t about convenience, usability or infinite corporate growth; it’s about our species’ co-existence with the environment that sustains us.

How do we counteract our biological need to put ourselves in the middle?

If our default is to think about ourselves and, yes, accidentally put our very short-term needs ahead of what’s truly important – the consistent, efficient, balanced functioning of the systems that support life on Earth – then we need some way of countering that deep, biological force within us. Or, perhaps, we can start by countering the force of consumer capitalism. The Earth still has plenty left in it to sustain us, and biological systems are remarkably regenerative, but not if we keep taking the consumer-centred approach we’ve been taking.

This is going to be a hard problem to solve. Let’s be frank, it already is. Frighteningly so, to be perfectly honest with you. There is so much systemic change that has to happen to alter our consumer-centred model to a truly, deeply ecological one. But, if the pandemic has shown me anything, it’s that, when we need to, we can adapt. It’s what kept our species here for so long.

I refuse to think we simply can’t do it, or that we have to wait another 50, 100, 200 years. The decline of our species will not be a meteoric, overnight event. It will be banal, painful, boring, slow. Generation after generation will not realise that what came before was better, richer, cleaner. It’s happening right now. We’re living it. Generations before mine had a planet with much greater biodiversity, a key metric for judging how healthy a living system is.

This gnarly problem of ours

This gnarly problem of ours – and I say ours in the broadest, most global, whole-of-ecosystem sense possible – needs a group of compassionate, open-minded people who are brilliant at thinking about complex systems and the interactions and effects between them. A group that has the best interests of our species at heart. Empathy, humility, creativity – the best that humans have to offer. A group with the means and a relentless drive to make the world a more sustainable place to live for all of us.

We can no longer refer to our life-support system as “The Environment”. As if it’s something to control, manage, pat on the head or admire from a distance. Without it, we don’t survive. We are the environment, along with everything else in it.

Maybe, instead of waiting for our Copernicus, we’re already able to convince the money-making machine of capitalism that the world doesn’t revolve around us anymore. Maybe it’s already possible to use our design thinking superpowers to create new frameworks and models that interrogate the non-human impact of the decisions we make in boardrooms and workshops. To consider the people who are next in our glorious timeline. To invent ways of demonstrating the ecological, social, political and, let’s face it, bottom-line impact of using a wealth of non-renewable resources to make micro-improvements to the levels of convenience we need in our lives. To ask, “isn’t this enough, already?”

Even as I write these words, I feel an inner helplessness. The experienced professional in me says, “Yes, sure, you go ahead and try that,” as it mentally books a flight to a foreign country to attend another meeting in person. I know this is hard, but I refuse to believe it’s impossible. Humans are capable of amazing things – both positive and negative. I want to be one of the ones who focuses on the former. I’ll die trying to make some positive difference because the alternative is that we’ll all die anyway, whether we go down trying or not.

May 2016

I don’t hate work like I should

This was first published on cogent.co

Sometimes I feel bad about the fact that I like my job. I hear it all the time from my friends and from my family: “Work sucks! I hate my job! Urgh, I have to go back to work on Monday.” It’s pretty depressing to know that the people I care about spend 8 hours a day (and most often more than that) doing things they don’t enjoy. What’s worse is that I’m not one of those people. I love what I do and I love the place I work and (yes – cue up the world’s smallest violin) it makes me feel guilty.

Work was something to be complained about

My parents are both blue-collar workers and always have been. They raised 3 kids with little to no money. They scrimped and saved every cent to put us through religious education, which isn’t cheap. They picked up odd jobs and worked weird hours to make ends meet – jobs like cleaning toilets at a casino, sweeping streets at a local plaza, stacking shelves at a supermarket and so on. These aren’t glamorous, life-fulfilling vocations. They were their solution to the financial pressure of keeping their family’s heads above water. And they complained about it a lot, but who could blame them? Work was something they just had to do to pay the bills. They didn’t have time for hobbies, and we hardly had a family holiday beyond a caravan that a next-door neighbour owned. This isn’t a sob story about my upbringing, though; it’s here to explain that my very early view of what work should be was shaped really strongly by my parents’ experience, and I suspect I’m not alone. Work was something to be hated and complained about; unfulfilling, but it had to be done.

And now I write this post from a cosy cafe in Melbourne as I sip a green tea and order a breakfast that’s far more expensive than if I’d made it myself at home and, well, it’s hard to admit to myself, but I’m about to start working. I spend my days listening to the problems that people have in their lives, and then I design easy-to-use digital products that make those problems go away. It’s incredibly rewarding work. Sure, there are days that are more difficult than others, and sometimes you end up emotionally deflated, or frustrated that, due to circumstances out of your control, you can’t solve that person’s problem. But on the whole, it’s averaging out pretty well.

It’s so rare for companies to walk the talk when it comes to living their values that it’s easy to be and remain sceptical about it when entering a new work culture

I work for a company that truly values work-life balance and, let me be clear, that’s different from a company that just says it does. Cogent has a very strong focus on personal well-being and life outside the office and, because of this, I’m able to run a second career as a children’s picture-book illustrator. So, I spend my days improving people’s lives through software, then get to spend my nights and weekends bringing parents and children together through the wonderful world of picture books. In between those things, I try to manage an auto-immune condition, which is really time-consuming. I can’t say that work sucks like my parents did. I actually really enjoy it and, well, this comes with its own set of problems.

When I go back home and sit around having a beer with my Dad and my brother (as the males in our family have traditionally done) we talk about work. “How’s work?” My dad asks us both.

My brother complains that being a plumber is hard work. He was digging a trench in the freezing cold the other day and now has blisters all over his hands that won’t heal until the weekend. My Dad contributes in between wheezes and puffs of his asthma medication. He’s been loading hundreds of 30kg bags onto aeroplanes as a baggage handler at the airport and his back is starting to seize up. He needs to go to the physio now to get it sorted before he can go back. Which, to him, isn’t a bad result: he doesn’t need to go back to work for a while, so “he’s got a few days off”. But then it’s my turn. What am I supposed to say? That work is great? That I’m really enjoying it? Should I go into details about my latest round of user testing and how thrilled people are with the improvements to their software?

How much am I allowed to enjoy working?

On one hand, I think they’d like to hear this sort of story. In some ways, it’s validation for my Dad that his years of toiling have paid off. He’d be happy to know that I’m happy. But on the other hand, it doesn’t show a great deal of empathy for their back-breaking labour if I talk about a nasty paper cut I got the other day. And, more importantly, how can they understand work that doesn’t suck when work is, well, supposed to suck? Yes, I’m using extremes here to demonstrate a point, but in my job here at Cogent, we’re swimming against the current in some ways. I work with great, smart people every day. We make things together that affect our world and change people’s lives. We’re not working at a big, shiny scale, but we’re making a difference to people.

I remember when I applied for a job here at Cogent a couple of years ago, it sounded too good to be true. Most companies will say they value work-life balance, personal wellbeing, professional development, transparency, creativity and so on. In my experience, this has always been lipstick on a pig, because most organisations talk the talk but don’t walk the walk. So I approached this role with a certain level of scepticism. But 2 years is plenty of time to find the cracks and, to be honest, there haven’t been many. Sure, every workplace has its ups and downs, and we certainly don’t dance around the office with lollipops and rainbows every day, but we’re doing great work and, for the most part, I’m really enjoying it.

So, what do I tell my family when we’re talking about work? They don’t understand the details of what I do day in and day out so I tell them that I’m lucky I’ve found a really unique company that supports me in doing what I do, inside and outside of work. I’m lucky to work with the group of people I work with and because of this, it makes work much easier than my family would be used to. I don’t do back-breaking labour but I’m not bored. I’m solving interesting problems and so I go home tired, brain-tired, but satisfied that tomorrow I’ll get to work with the same group of amazing people and solve even hairier problems with them, all in time to get home for tea.

February 2014

The short but important history of design

I was once a really bad designer. There, I said it. I was slow, not very innovative, boring and laborious. I was all of these things until one day it changed: I began to find history interesting.

I never had an interest in history. As a child growing up in Australia’s education system – through primary, secondary and even tertiary levels – history was the subject I found most boring. I loved the sciences and English, practical subjects that would help me in the future. My focus was all about the future during those years – what will I do with my life? How will I get there through the actions I perform tomorrow? What if it doesn’t go as I planned? Little did I know that history was actually the answer to most of my concerns about the future. It wasn’t history that was boring, it was the way it was being taught.

When I met my wife about 8 years ago, my career path changed course like I would never have imagined. Among the many interests we shared, history was certainly not one of them, particularly her passion for England’s Middle Ages; I apologise if the mere mention of such a dry subject just made you yawn. But, like most couples, we compromise, and we couldn’t always watch re-runs of Seinfeld and Tintin like I wanted to each night after work. Believe it or not, there are some interesting historical documentaries and, as we learned together about the evolution of manners, the growth of the English language and the birth of modern medicine, one thing struck me – there’s a history of my profession, design, and I knew nothing about it.

The history of design is not something they teach in tertiary design education, or at least it’s not something I was taught. The focus was on ‘skilling up’ students for the workforce or priming them for a career as an academic. What web technology is around the corner? What jobs will exist in 4 years’ time? What are the components of good web design? Those I know who were taught the history of design were taught it within the framework of graphic design and its evolution from art. I look back now and realise how crazy this is! Why is the history of design not being taught?

Is it because design as a concept (or practice) may be too broad? Perhaps it needs to be understood within the context of its specialties – industrial, graphic and architectural design would all have their own unique reasons for existence, no doubt. Or maybe, in the case of digital design, it’s considered ‘new’, or too young to have a history – a bit like comparing Australia’s history as a country with that of, say, the continent of Europe. Perhaps it’s because no one has ever successfully recorded and linked up the historical events that have led us to the design we practise today. The word ‘design’ means something different to almost everyone I talk to, yet we all call ourselves designers.

History aficionados will tell you that the value of studying history is that we learn from our past so we don’t make the same mistakes in the future. I would whole-heartedly agree with them. However, in school we tend to view history through a socio-political lens (I believe they call it ‘humanities’). This lens teaches us some valuable lessons in a very ‘remember the facts’ way. We know WWII happened on particular dates and was started by a particular guy. We remember a bullet-point list of what happened, we are shocked that such a time ever existed and we conclude that performing those actions again, in the same way, is not going to lead to a good outcome. The whole world of scientific and mathematical understanding uses a similar model of learning. The lessons learned from past events, experiments and discoveries are recorded and used as signposts about what to explore (and what not to explore) next. Someone told me it was called “standing on the shoulders of giants”, but no one told me that design is no different. I wish they had; it would have saved me a lot of time.

If I reflect on my early years as a designer, I learned very quickly that splash pages were a bad idea, that using Comic Sans in anything doesn’t make an art director happy, and that telling a client they were wrong and should listen to you because you’re a ‘trained designer’ was definitely the wrong way to rationalise a design decision. It’s the classic ‘learn through failure’ method of becoming a better designer.

My big design revelation didn’t come from having these experiences, though. What happened was that I realised a plethora of designers have had careers before mine, and most of them had already failed at solving the same problem I was solving. Sure, technology has changed how each of us did it, but it’s still essentially the same problem: communication. In 2008, Malcolm Gladwell popularised an idea in his book Outliers – you need 10,000 hours of experience to become an expert in something. What he failed to elaborate on is that, in design, those 10,000 hours of experience don’t have to be yours specifically. The answer to becoming a better designer is to understand history and how to interpret the one common denominator across any industry or domain – human behaviour. A bold statement? Well, let me elaborate.

By understanding the events that led up to other events, and the events that followed those (i.e. history), we can start to predict things based on patterns, and we get a better and clearer understanding of human behaviour. Essentially, what I’m saying is that we’ve already got big data about how humans react to social, political, sporting and economic events on local and global scales. The question isn’t ‘did it happen?’ anymore; it’s why it happened and how you go about leveraging this knowledge to influence future decisions. Tapping into this data to solve the big problems of the world isn’t easy.

I don’t know if I have the answer to this question, but I do have an insight into when it all changed for me – I read the book Graphic Design: Referenced, from cover to cover. Suddenly I understood why the visual world was the way it was. I knew artists’ and designers’ names, their work and, more importantly, the reasons why they produced the work they did at the time they did it. Some responded to social events and political pressures. For example, some expressed support for war in posters; others made posters with the opposite sentiment. I understood why cubism superseded abstraction in art. I knew why the rigid, slick, minimalist Modernist movement was a ‘natural response’ to the embellishment associated with Art Nouveau, and how that affected the buildings we live in today. It also provided me with an explanation for why we’re now seeing a trend toward flat design in digital interfaces. We’ve just exited a period of design history that will be marked as ‘the skeuomorphic era’, where we tried to make digital elements look and feel like the real world; buttons that looked like 3D pushable objects to help ease our way into interacting with a new type of world. This ‘digital movement’ exposes some underlying human responses to the built and social environment which, if you look closely enough, have happened time and time again throughout history (in this case, it’s that we get bored quickly).

Reading Graphic Design: Referenced was a key moment in my design career for three reasons:

  1. It exposed the value of understanding the evolution of human decision making
  2. It made me crave more of it across every knowledge-domain I could get my head around and, most importantly;
  3. It made me sound like I knew what I was talking about.

A new level of design rationale was injected into my vocabulary. I found myself saying things like, “Well, the brand values of the organisation are similar to those of Modernism in the 1950s, so I’ve selected this colour and these shapes to try and communicate the clean, minimalist aesthetic of Modernist architecture and design that I associate with this brand. Here are some examples from Frank Lloyd Wright and Louis Kahn to help illustrate my point.” A week prior to this rationale (and not having read the book), it would’ve sounded something like, “I chose this colour because it looks nice, and this line is a bit straighter than the wavy option, so I thought it would be good to give them multiple options.” This injection of understanding into my veins was addictive; I needed more of it.

The book I read explained the world of arts & visual communication, but what about other fields? What could I borrow from architecture, maths, science, language & culture? I went non-fiction crazy. I stopped reading books from cover to cover because events in one field kept connecting to decisions someone made in another. As someone who never really read at all (let alone read about history), I suddenly had 8 books on the go at once. A paragraph here, an anecdote there, a highlighter wearing thin. I was addicted to history and addicted to my new ability to create better work and to rationalise those creations; the confidence in my own work increased exponentially. It also made me change focus in my career.

I thought graphic design was what I wanted to do forever. I love the visual world, almost more than anything, but with this newfound knowledge and love of history, my career went from graphic design to ‘design’. My focus shifted from guiding an eye across a page or screen to guiding human behaviour – anticipating and testing the human response to a change in environment. What does this mean? The challenges got bigger, a little more conceptual. I strongly believe I racked up my 10,000 hours in about 4 years by reading everyone else’s stories from eras gone past. Their stories became my reasons.

Since then, I’ve found the world of design consultancy, where I solve problems for different clients every 2 or 3 weeks. I’ve been really lucky to work with super-experts in all sorts of fields, so my work has become my play. Because of this, I understand the psychology behind supermarket design, the sensory cues that lead someone to choose whether or not to swim at a beach that ‘looks a little dirty’, the herd mentality of football club supporters, the importance of the tourism trade in Australia and all that it affects. The list goes on. I still practise graphic design (I’m not sure I could ever give it up) but, because of my in-depth and constantly growing understanding of how humans interpret the world, those designs are basically bulletproof. The frustrating world of subjective client commentary has dwindled over the years – it’s not about opinion anymore, it’s about science.

If someone had told me that design was a left-brained activity when I was studying, I would have laughed. Creativity and the visual world are typically associated with the right side – intuition, feeling and emotion. But the truth of the matter is that there is a science behind producing artefacts that elicit intuitive, emotional responses. By understanding that science through understanding history, the ‘magic’ and ‘mystery’ of design goes away, but the feeling of solving a problem successfully only gets stronger as you get more accurate.

Reflecting back on that ‘lightbulb’ moment in my career (about 5 years ago now), I can’t stress enough to any aspiring or practising designer the importance of understanding history. Not just the history of whatever industry you’re in (or want to be in) – although that’s a great place to start because it will likely be more interesting for you – but the forces that shape the way humans (and therefore society) evolve. Collect as many dots as you can about how and why humans have come to exist in the current state of the world because one day, in some design challenge in some niche industry, those dots will be there for you to connect and a problem will simply solve itself.

September 2013

Designers need to get their hands dirty

“In a perfect world” is a funny phrase. My parents used it as a way of encompassing all the things they couldn’t change (but probably wanted to) when I was growing up. “Well son, in a perfect world we wouldn’t have to…” is how they would begin a response to a question I’d ask. I caught myself using this scapegoat the other day. I was trying to calm my frustration over a design decision by a client I was working for – a decision that addressed nothing but business revenue. The social and environmental impacts of the decision were ignored completely, and it made me mad. It also got me thinking: should I give up on this organisation and try to find more satisfying work? Or do I continue chipping away at this brick wall in the hope of tiny wins? I was at this impasse when I rode the 9:52am train from Huntingdale on Sunday. The obvious choice quickly became crystal clear.

As a designer, I try to influence the decisions businesses make about how they spend their revenue so there’s a positive outcome for both the business and the human. But a lone voice in a sea of stakeholders is often drowned out. I’m a firm believer in corporate social responsibility, and I’m a firm believer in design’s ability to change human behaviour – for better or for worse. Ultimately, the decision about how a business is run lies with the business owners – except when that business is a public service.

In 2009, Metro Trains won the bid to provide Victoria with public transport and, since then, services have slowly become more frequent and more reliable. However, improvements come at a cost, and for the Victorian public that cost is exposure to advertising on every journey. It’s not an unusual model. In 2013, it’s rare that I board a train service that doesn’t have advertising in it or on it. Windows are plastered with one-way billboards that block incoming sunlight and remind all commuters that “Sportsbet is here to help us with the best odds”. If gambling is not your thing, as the doors open there’s a vast smorgasbord of products to select from once you’re inside; eye surgery to make your eyes look less Asian, perhaps? An impulse-buy opportunity if I’ve ever seen one (and yes, these are both real examples). Sure, I wish this advertising wasn’t all over the place and I’m sure there are better ways to achieve the goal, but I understand the need businesses have to find ways to raise money. They need to pay bills, to keep shareholders happy or, in a perfect world, to deliver better public services.

As a seasoned designer, I could rationalise the existence of this advertising. I could easily imagine the meetings that go on within the Metro marketing department about bottom lines and revenue generation – until I boarded the 9:52am service from Huntingdale last Sunday and saw the following advertisement.

[Image: two scary, made-up vampires plastered across the doors of a Melbourne train]

As I stepped onto the train, I was greeted by the grotesque imagery of the latest Dracula’s advertising campaign. Dracula’s, as the name suggests, is a horror-themed theatre/restaurant in Melbourne where patrons can go to get the pants scared off them and, by all accounts, have a very pleasant night. This isn’t about Dracula’s, though; it’s about public service. My experience on the train raised so many questions about the imagery I was confronted with and, in comical style, the first words that came to mind mirrored the catchphrase of The Simpsons’ character Mrs Lovejoy – ‘won’t somebody please think of the children.’

Public service is literally a service for the public. Regardless of the brief, the people involved, or the complex chain of decision-making between Metro, its advertisers and the ‘designers’ tasked with creating the graphics, I struggle to understand their definition of the “general public” audience segment. Is it not “all people aged between 0 and 65+”?

You don’t have to be a regular commuter to understand the wide range of ages that ride our public transport. From what I’ve observed, 14-year-olds might well be the largest age group. They’re too young to drive but want to avoid the mum/dad drop-off at a social event whenever they can. They primarily gather at locations that are easily accessed by public transport (like the movies), where we have laws preventing them from watching movies rated MA15+ and over. Why do we live in a topsy-turvy world where the images they’re exposed to on their way to their PG movie are scarier and more damaging to their perception of the world than the movie itself?

This ‘design’ doesn’t just affect the tweens. The faces in the poster are placed at the perfect height so that a young bub in a pram has no choice but to endure the company of these oversized, grotesque caricatures for the length of the journey, because their mother is forced to park the pram right in front of them on an overcrowded peak-hour train. The only thought that comes to mind is reckless design; reckless on every single count.

What worries me is that ideas and images like this can be published at all. I can reel off a list of approximately 6 groups of people (from managers to designers) who were likely to have seen this ad and who would have approved it before it was plastered on the train carriage doors. Not one of them thought to challenge it? Every single person thought it was OK? Is it a case of designing in a vacuum? Did everyone fail to realise how much scarier the faces would be once they were no longer framed by some ‘designer’s’ backlit, high-definition MacBook Air, but lit by the flickering, sallow train lights on the last service leaving Flinders Street Station on a Saturday night? It really shook up my day and my week. My principles as a designer, and the principles of the people I get to work with, are so far removed from such visual tripe that it makes me realise the world’s problems are so much bigger than my little world; I won’t have enough time in this life to solve them all.

How does a designer with the best intentions get airplay or influence over these decisions in a system that’s not yet ready to listen? The social and economic systems that have been put in place are difficult barriers to break down, and they were there well before I was born. Large ‘design’ agencies touting “years of experience” are perceived by blame-averse decision-makers in large organisations as the ‘experts’. The experts in what? The experts who constructed the world of advertising and over-consumption out of the tools handed to us by the industrial revolution? These legacy social and economic systems are now starting to rip at the seams as over-population strains the pillars those ideas were built upon. Things change, they always will, so why haven’t we designed flexibility into our systemic solutions to deal with our own evolution?

The Dracula’s ad in the public space is no doubt reckless but, if designers aren’t willing to have these conversations with clients, who will? It’s tempting for a designer to gravitate towards clients that share the same values. If I want to work on sustainable solutions, then I should approach companies who are doing it already, shouldn’t I? Perhaps this approach is a little self-serving rather than world-changing, though; a little too comfortable? Sure, I’d feel better about my life coming home after an exhilarating day of discussing how wrong everyone else is with an employer who ‘gets it’ – but how will this fix anything, except maybe through example? Even then, are the right people in the organisations who ‘don’t get it’ even watching or listening to those examples?

Perhaps a designer’s true calling is to work on projects with organisations that can’t yet “see the light”; those who need to be shown what good design is capable of. Good design is more than visual aesthetics, or horror ads on public transport; good design is systemic. It considers the financial impact of a solution alongside the environmental and human ones. It can protect an organisation from a landslide of new technology or from economic crises. Perhaps we need to climb down from our ivory towers and stop preaching from on high (and to ourselves) about the way the world ‘should’ be and how no one understands us and what we’re trying to do. We need to dive into the trenches of organisations that still believe advertising Dracula’s on a public service with reckless imagery is a good idea. We need to approach the system from the bottom up; to show that this sort of reckless advertising is not OK and that a better solution for generating revenue exists.

A designer’s value is in their ability to create culture change in organisations by creating and contributing to products and services that lead to better outcomes for humans, the environment and the bottom line of the business. Without the business facet, it won’t be long-term or sustainable. Taking this approach means a lot of the things we designers hate: too many meetings, not enough action, a lot of ego and not enough collaboration. But the potential for someone listening will be greater; the chances of being heard are immense. Large-scale, lasting systemic change can be enabled. There’s no doubt this is the hard way, but maybe it’s the only way.

March 2013

Makers gonna make

As I meander through the stories and biographies of artists and designers (both past and present), a common theme emerges – makers make because they feel they have to, and they can’t imagine doing anything else. What’s not clear is the motivation. Does this insatiable need to make stuff come from an inborn sense of insecurity? Do we as makers (writers, visual artists and performers, myself included) send our thoughts out into the world to elicit a response from another being that might answer the deep-seated questions we have about who we are, why we’re here and how important we are in the big scheme of things? Perhaps we’re all hopeful narcissists – chasing fame and fortune, or hoping to be cited as a key influence in the progress and understanding of humankind? The benefits of being an artist or designer could very well lead to these outcomes, but what happens to us as makers if the response we get is not as we had imagined or, worse yet, there is no response at all?

As makers, why do we yearn for the approval of others – our clients, our employers, our fellow makers? Why do we chase jobs that dangle the carrot of working with “high-profile clients” or “global brands”? Why do we get this immense sense of pride when we see our work strewn across public space on billboards and banners? Why do we crave our art director saying “yes, nice job”, or the client hugging us with gratitude after a pitch? We know that the work we do is mostly short-term; it’s a fleeting campaign or artefact or website that lasts a year or two in the public eye. We work long, hard hours on this because our own sense of self-worth relies on it. We don’t judge ourselves on our output but rather on the response to it – we don’t feel we’re ‘good’ designers if we do work that isn’t recognised by others as ‘good’. If a large organisation like BMW or Coca-Cola thinks we’re worthy (because we know they’ve got money on the line and wouldn’t risk it on a ‘bad’ designer or artist), they obviously know what they’re talking about, don’t they? Do they know good design?

My wife and I are ‘makers’. By day we design; by night we write and illustrate. Producing something from nothing is time-consuming and heart-wrenching, but equally thrilling and satisfying. We often question our motivations. Was it inevitable, based on our personalities (and our genes), that we would be makers? Do we really need the approval of others as evidence to ourselves that we matter or that we exist? We know that, given the opportunity, we can influence our built environment on a large scale, and so our perfectionist selves toil and sweat to make sure our thoughts and ideas are heard, understood and used by others. For some reason, we feel we must “do”, and maybe asking why is a mistake. Maybe there isn’t just one answer.

Whether we are makers or not (and whether we realise it at all), everyone is responding to their world all the time. We consume gigabytes of information daily – each piece shapes our opinions and gives us our own slant on the world and all of its problems. For some reason, makers need to tell everyone else (whether they want to listen or not) their version of events. We do this in written word, visual art or performance but, often, these things don’t pay, so we turn to design. It sounds like a pretty good deal initially – with design we get to make (which is really important to our identity) and we get paid for it (which is really useful if we want to continue living). When we leave our educational institutions, we present our degree and folio to any employer who will give us 5 minutes and proudly declare, “I did it, I did it!” as if we’re the first ones to have ever had a mediocre design education.

After the first few years, one comes to realise that university, TAFE or any isolated academic environment does not train you to become a designer – it’s missing that one vital ingredient: the client. It’s only through repeated disappointment (when your idea gets thrown out by the client or the art director) that you begin to learn what design is – a compromise. For a while, as with any stage of grief, you close your eyes and pretend that it’s not happening. You believe that one day, if you persist, someone will see your genius for what it truly is and your ideas will matter.

Occasionally, as randomly as a poker machine paying out the top prize, a client says yes, yes I like your work, you matter. You get that uncensored, unchanged design idea through the client gates and out into the real world. The billboard gets printed, the brochure distributed, the flags bear your vision in the city square. The sense of pride you have is immense, almost unmatched by anything before. It feels like it will last forever, perhaps years, especially when you win that coveted industry award and have a trophy to show for it. You officially call yourself a prodigy and believe you’ve now got the experience to take on the art director for his job. But in reality, time passes – 1 month, 2 months, 6 months tick away as surely as a metronome keeps time. The irony of course is that the feeling wanes, the drudgery of client negotiation returns, the endless days of concept re-work consume your every thought and you persist once again, hoping that one day soon the client will say yes to your unchanged output.

There’s a problem with this model for the makers. Design masquerades as a paying seductress. It lures you with its siren song: “Come forth, channel your creative energy (i.e. your insatiable need to make) into something that will give you money to live. You will get that feeling of maker pride, I promise. The world cares. Your creative director cares. You can make a difference.” But the reality is, over the course of one’s 40-year career it will probably amount to 17.3 months of pure ‘maker joy’ (if that’s what you want to call it) but hey, who’s counting?

I should pause for a moment before a rant descends upon me; despite my cynical outlook thus far in this post, a professional design career is not all doom and gloom. I’m sure I could do no other job – getting paid for those fleeting moments of ‘maker-joy’ is somehow addictive. Like the rat who continues to run the maze looking for the cheese: he knows it’s there somewhere, he just needs to keep looking. It’s just like art really; you may paint 10 paintings but only like one, you may compose 70 songs and have barely enough for an album. What design gives you that art does not is the relationships you build with the people you work with and those you work for. This is one of the unsung rewards of any design career. I’m quickly learning that these relationships are probably the most important and rewarding aspect of professional design. My career is only 11 years old and I’ve had the pleasure to work with and learn from some very talented thinkers. With that said, whilst I may be proud of the work I produce in my professional career, what I’m finding is that it’s not scratching that ‘maker-itch’ within me often enough. I need to tell and show people my version of the world without filtering it through the lens of a creative director or client.

Art (or as I like to call it, self-expression) is playing an increasingly important role in my life outside of my professional design career. Sure, it means late nights, but late nights don’t matter when you’re doing it for yourself. It’s also frustrating; sometimes the maker itch simply can’t be scratched because the day has been spent negotiating a way through yet another client critique minefield and, by golly, that endeavour is tiring! But it does not, it cannot, stop my overwhelming need. I’m a maker and I will continue to make; not for the gratification of a client and their business objectives, not to reach my KPIs or climb another rung on the professional ladder, not even to win one of the four million awards that the creative services industry dangles in front of businesses every year to help them evaluate their self-worth; none of that business-y stuff matters when your goal is to tell your story and to tell it your way.

I make! Not because I’m insecure but because I’m actually too secure; perhaps too sure of myself – just like I was when I wandered into my first design job. I’m too sure that my version of events counts. I truly believe I can effect positive change in our world, and the harsh reality is that there’s a real chance the client’s budget may not help me get there, nor will the creative director’s opinion of my colour palette selection. It may not count to anyone except me, but if the output of my own self-expression can inspire someone else to tell their version of events, then the reward for me isn’t in the response from the outside world, it’s in the contribution to it.

If this message in a bottle reaches a young designer out there about to embark on their career, or a seasoned designer who graduated with dreams of being an artist or performer because they had the same urge to make, I leave you with this:

Find some time in your day (it need only be 10 minutes) and sketch what’s in front of you, write down what comes to your mind or play that favourite series of notes on your piano that you used to enjoy so much. Every mark you make, physical or digital, tells your story and tells it your way. Who knows where another soul will take it or what meaning it might have for them. We can choose to share our view with the world or not – our own version of events is too important to exist only in that little film projector in the mind.

January 2013

You are not creative unless you create

The word ‘creative’ has become a label that we give to those people who make things and make them well. Whether it’s the Don Draper creative director of your workplace or a crafter in the far-off lands of some remote Siberian community, a ‘creative’ must have two qualities – they must be able to generate ideas or physical objects and be great at it. Unfortunately, the term has evolved to become somewhat misguided. The problem I have with our current description of a creative person lies in the ‘great at it’ bit. What is ‘great’ anyway? You can use “good”, “awesome”, “out-there”, “innovative” – whatever positive feeling you like; the key is not the word but the fact that we’re only comfortable labelling someone a ‘creative’ person if the ideas and artefacts they put out into the world are judged to be of significant value or surprise. We don’t refer to people as ‘creative’ if they come up with ‘bad’ ideas or ‘silly’ ideas. Those are the sorts of people we label as weird, stupid, crazy, eccentric or, worse yet – not very creative.

I propose a new frame of reference, a new meaning for this sought-after sense of self-worth, this label of ‘creativity’ we so aspire to have bestowed upon us by colleagues, friends or family. I propose that ‘creativity’ need not be a judgement of value but rather a judgement of repeated behaviour – and repeated behaviour only.

Consider the following words:

Negative
Dismissive
Active
or almost any of the words here

Now consider them in the context of how we ordinarily describe someone. For example, “Nancy is a really negative person.” When we announce this fact about Nancy it is not implied that she is particularly good at being negative. We don’t seem to grade her ability at it either way. When we label someone as negative, dismissive or active we seem to be satisfied with the fact that all they need to do is to do it a lot. If Nancy was in a bad mood for a single afternoon we’d hardly describe her as ‘negative’. We’re more likely to say “Nancy is having a bad day, she’s normally much nicer than this.” It’s not until we see the behaviour repeated again and again that we begin to feel comfortable and correct in giving the label and describing that person in such a way.

How does this relate to being creative?

I’m not sure why, but there’s an inherent value judgement in the word “creative”. I see that the Melbourne School of Life has a sold-out session called “How to be Creative” in February led by Sarah Darmody. Yes, that’s right, it’s sold out; one of the few that have. It promises to demonstrate and discuss techniques, give a guide to boosting ‘creative confidence’, identify motivations for creativity, creative triggers and innovation and, here’s the clincher… how to handle criticism. I find it ironic that we’ve talked ourselves into a corner where being a creative person means you not only have to generate a plethora of ideas but you’ve also got to have a thick skin, have ‘confidence’ in yourself and be always on the lookout for the right ‘trigger’ that might just lead to that new idea, the next ‘game-changer’.

I won’t be attending the School of Life lesson but it raises a question in my mind – how should one “be creative”? The reality of creativity, just like the negative Nancy example, is that to be known for something you just need to do it; again, and again, and again and again. By reducing ‘creativity’ to its root word “create”, the value judgement is removed and the simple fact reveals itself – to be creative you just need to create. Save the value judgement for after the creating, when you can reflect on your creations without it affecting the outcome of what you’ve created in the first place.

In the pursuit of personal creativity there are no rules to creating: one creates what one wants to create, whether it’s something artistic like a watercolour painting or something more practical and “everyday” like a doorstop to stop the back door from flapping about in the breeze on a hot summer night. Regardless of whether your creation is good, bad or ugly, the habit of creating anything at all simply provides you with the practice you need to become ‘better’ at it – just like any sport, craft or fine art. Once you become better at it, confidence increases.

Of course, it’s easy to sit in the ivory tower and preach “make more, make more” but one can’t ignore the human emotion that goes along with what we’re calling ‘creative confidence’ these days. Creative confidence is validation from the world around us that what we’ve created is somehow valuable, aesthetically pleasing or useful (or all of the above) to those who are not the creator. If other people tell us our creations are good, they must be… right? Why do we value the opinions and feelings of others more than our own judgements?

The great thing about our time and place in history is that any pursuit to create can be broadcast via blogs, Twitter feeds or Facebook to masses of people – incomparably so to any other time in history. Our audience is no longer those who walk by us as we’re busking or art-making in the streets of an isolated city – it’s everyone with an internet connection. When it boils down to mathematical probability, there’s going to be at least one other soul (but likely many hundreds and thousands of souls) out there in the world who finds anything you create to be valuable, aesthetically pleasing or useful to them (or if they’re lucky, all of the above). If you’re creating for yourself, there are no risks, only rewards. If you do not make any money from your creations, nor find anyone else who shares your passion for whatever you create, you first need to ask yourself whether that was the point of what you’ve created. If it was, you were not creating for yourself. If the motivation is to create because you want to be more creative, then the process becomes the reward and you can be satisfied with your own efforts. You alone can bestow that elusive title of ‘creative person’ upon yourself and wear it proudly wherever you go.

Note to ‘creative professionals’:

For those of you who are designers and whose job descriptions include ‘being creative’ – the process, motivations and output of ‘creative thought’ described above do not necessarily apply, because the definition I pose for creativity here is not the same as the one most ‘creative’ workplaces use.

Creativity in the professional setting is much less about ‘creating’ and more about ‘thinking differently’ or thinking innovatively – coming up with a ‘new’ idea, something that no one else has thought of yet. The focus and value proposition for these agencies is not the process of creating; it’s what comes out at the other end. With that said, the rules of personal creativity still carry immense benefits for individuals working in these environments.

By being in the habit of creating for one’s self, one can assume that a creative person (by my definition) will in fact come up with more ideas than someone who does not practise creating (even if those ideas aren’t initially to the creative director’s liking). By coming up with more ideas you are more likely to find one that, at the very least, gives your colleagues or art director a springboard to bigger and better ideas. The advantage of ‘creative’ workplaces in this context is collaboration. The power of multiple brains (all pre-loaded with their own different experiences and interpretations of the world) working on the same problem is phenomenal. This is the basis of ‘brainstorming’ and is a useful process for finding connections between ideas and concepts that may not have been found without the magic mix of different minds interpreting the one thing. It doesn’t work unless these minds can tell each other what they’re up to, but ‘creativity’ in the workplace (or innovation, to be more accurate) is the subject of a much longer discussion.


July 2012

Staring into the fire: The benefits of alone time

Over the last year or two, over-connectedness in society and our inability to “disconnect” from technology has been a recurring theme in my incidental reading. Much has been written and spoken about the negative effect it is having on the way we process information and communicate with each other. Whilst many of these essays are quick to point this out, not one suggests a practical strategy to counteract our addiction to real-time updates. To me, there’s just one logical strategy: increase the amount of time we spend thinking about, well, nothing. What’s the value proposition? We’re guaranteed to be more creative.

Throughout my internet meanderings a variety of writers, critics, scientists and essayists have tried to coin a simple phrase that encapsulates moments of rest for our brain. It’s been called “Creative pause”, “Idleness”, “Gap-time” and the “Empty box” to name a few. Whilst no one can agree on the term, there does seem to be consensus that as technology demands more of our attention – as we fill our daily opportunities for alone-time with a quick refresh of our pocket-dwelling internet devices – we’re starting to realise that there is tremendous value in letting our minds wander. We’ve also come to realise that one of the best environments to facilitate this is through moments when we’re alone.

Every day our brains are bombarded with tidbits of information. 30-second video clips, the internet’s best one-line quotes, the branding and advertising as we walk through public space. The 5-minute phone calls, the 4-hour Skype ones, the 100s of emails, the 1000s of friend requests and an excess of tweets to follow and read. There’s now a piece of information designed to fill any gap in our lives. We feel that some of these bits of information are important enough to act on; others merely permeate our attention momentarily before the next distraction takes their place. While this information shrapnel is being sucked into our brains through our senses at every potential moment of downtime, when do we get the chance to put these seemingly disparate jigsaw puzzle pieces together in a way that is meaningful?

There is an expectation in our first-world secular culture that taking a moment to be alone, to ‘space out’, to think about ‘nothing’, is wasting time. We tend to feel that these moments might be better spent consuming more information or outputting something of significant value into the world. One has to wonder how this came to be. I’ve been reading “The Rituals of Dinner” by Margaret Visser, which traces the evolution of manners in our society and how children learn to ‘behave’. I couldn’t help but wonder whether the societal construct of ‘manners’ could be partly to blame for perceiving alone-time so negatively. Let me present a scenario.

Someone is standing in the corner of a room at a large party. There are large groups of people mingling, laughing, telling stories, but this person is not holding a drink or a cigarette, and they are not talking to anyone. They have their hands in their pockets and they are just looking out of the window. Someone who does not engage in conversation when there is an opportunity is somehow excluded from social circles. They are judged to be either uninteresting or snobbish, either boring or unattainable. Sure, an extrovert may try to (in their words) “bring them out of their shell” but inevitably this person is disengaged and prefers to be alone. We assume that if no one else is conversing with this person then other people must have tried and failed, and because of this we make our own judgement about them – and most of the time it’s to find someone else to talk to.

For some reason, and perhaps it’s learned behaviour, we never assume that the lonesome person in the corner of the room might just be thinking, or having a rest from the everyday. To avoid being pigeon-holed, and to avoid automatic social rejection from a group of people whom we do not yet know, we’ve come to rely on the smartphone. The internet-enabled digital device in our pockets has become the default broadcast mechanism we use to tell those around us that we are still part of someone’s tribe. Today, a person standing in the corner of a room at a party fiddling with their digital device is assumed to be a social equal to the flamboyant story-teller who mingles and meets effortlessly with people they may or may not know. It’s ironic that only now, having designed ourselves into a situation where every beep from our pocket signifies social acceptance within a community, have I realised that the process of not-doing may be just as important to us as the process of doing.

Being alone at a party, restaurant, bar or other public space without a phone in hand might seem strange to some, but within a religious context such behaviour is normal, if not encouraged. Whilst I’m not a member of any religious faith, I grew up in a family that was devoutly religious across a diverse range of faiths. Regardless of their different belief systems, a common thread ran through them all – their interpretation of alone time was labelled ‘reflection’ and it was an essential aspect of practising their faith.

For some reason, the word ‘reflection’ conveys a sense of meaning or purpose to the activity of being alone. Reflection from a religious viewpoint is essential for well-being. It’s your time; time to think about yourself and your place in the world. Time to consider your experiences and how they have affected your point of view on the world. Reflection in a religious context isn’t ‘weird’ and it doesn’t mean you have no friends. In fact, the more you ‘reflect’ the more ‘devout’ you are perceived to be. Be it the different strains of Christianity or the religions of eastern and western cultures, all of them contain this common interpretation of worship. Why is it OK to do this within a religious culture but not in a secular one? Why aren’t designers doing more reflecting?!

The avenues that demand our attention in our technology-driven lives are increasing more than ever and the trend suggests it’s not slowing down anytime soon. Our first-world culture has been habituated to the hyper-consumption of information and resources and, as a result, we easily fall into the trap of “FOMO”, ‘the fear of missing out’. The fear of missing out is the fear that if we don’t keep up to date with every input of information from all parts of our lives (whether it be work, social life, general interest or world affairs) we might miss some important piece of information that, for some reason, might change our lives for the better. This anxiety of being ‘left out’, of not knowing what someone else knows, seems to be the driving force motivating us to fill our gap-time, to sacrifice creative pause, to avoid idleness or reflection at all costs.

Whatever anyone calls it, religious or otherwise, for me reflection is the time our cave-dwelling ancestors had for sitting around a fire and just staring into the flames. Those moments when conversation played second fiddle to the aimless mind-wandering our brains needed to recover from a hard day of hunting and gathering. Not only did it give our brains a chance to rest, it also provided the opportunity to improve the way we did things. What went wrong today? What went well? How could we improve the hunt next time? It gave us the opportunity to assess our life experiences internally and come up with solutions to those problems. It let our imaginations run wild with ‘what-if’ scenarios before we broadcast them to the rest of the group and, as a result, it created the perfect environment for that buzz-word of the 21st century – innovation.

I’ve written about the value of time and the process of incubation in a creative context before. I know and practise the importance of letting a design problem ‘incubate’ for days before the inevitable lightning bolt of creativity strikes and the solution just presents itself naturally. It’s not until right now though, sitting in this coffee shop alone, without my wife or colleagues, family or pets, that I realise that life experiences also need time to incubate. We need to create the time to reflect and let the natural connections form between those bits of shrapnel that build up on our brain like moss on a rock left out in the rainy forest. The irony for me is that this moment, as I write this essay, is proof that time alone can yield positive results and help me create.

One of the barriers to finding alone-time is, of course, time itself. Many of us wake up in the morning, check our digital calendars for the day and see a continuous block of colour telling us that yet again, today is another day our time is devoted to someone else. And yes, in the habit of day-to-day living, many of those appointments are simply immovable. Whilst at first this might seem like an impossible barrier to overcome, looking ahead in the calendar reveals gaps in our dedication of time to others. Whether it is next week or next month, there will inevitably be moments that are not yet scheduled; we see ‘gap-time’. Identifying this gap-time is step one; step two is to act upon it. The problem then shifts to our tendency to perceive scheduled alone-time as less important than anything that comes up involving someone else – and that’s a tough nut to crack.

One of the biggest barriers to keeping my scheduled “fire-staring” time (yes, that’s what I’m calling it now) rather than devoting it to someone else comes down to my parents training me too well in manners, in being unselfish. There is, unquestionably, a level of guilt associated with reserving some of my time for me. Not only has it been bred into me that being selfish with my time is rude and should be avoided at all costs for the sake of my relationships with other human beings, but time kept to myself is also perceived as less valuable. If I devote my fire-staring time to someone else perhaps I’ll benefit from it in a more tangible way than if I keep my time to me. Perhaps the promise of a free meal or a case of beer from a friend, or a kiss from a member of the opposite sex, or even just the experience of interacting with other human beings will be more valuable than simply being alone and “thinking about nothing” for a while.

Fire-staring hints at the potential for value whereas someone else promises it. ‘Staring into a fire’ isn’t a socially accepted excuse for not devoting time to someone else. Of course, some people need less fire-staring time than others, which presents its own barrier: the value of it is harder for those people to understand. These differences in perception can put strains on social relationships. There is the fear of being alone, of being ‘left out’ of future social invitations, that niggles at the back of one’s mind: “You stay here and stare into the flames whilst the rest of the tribe goes hunting”. With this said, I’m now more confident than ever that the benefits of fire-staring time can manifest themselves in ways we can’t predict, and on a scale that is more beneficial to the social fabric in the long run. It’s for this reason that fire-staring time has become one of the most valuable appointments in my calendar.

My own experience has shown that fire-staring time is a catalyst for analytical and creative thought about problems and experiences I wasn’t even consciously aware I had; problems I don’t have time to recognise during other parts of my day. I write more, I read more, I draw more and I have the mental sustenance to create more. Fire-staring recharges my batteries. It allows me to join some important jigsaw pieces together and throw out the ones I’ve been holding on to; I’ve had time to look at them properly and realise they don’t fit my bigger picture. From an outsider’s point of view, I can understand that the correlation between fire-staring and creativity seems tenuous but, like most things that come naturally to us, it’s difficult to identify, let alone explain to another. I only know that it happens through observation of my own habits and creative output – the jigsaw metaphor sums up how it feels.

For me, fire-staring has tremendous value in giving me time to interpret my world. Even if at times there is no output, it’s become an intrinsic and regular part of my life; like collecting a bucket of shells at the beach until the bucket is full. One has to take time to empty the bucket and sift through the shells to find the ones that go together in some way. Once these are organised, the mosaic can be made. And once the bucket is empty and the mosaic is made, the process of filling the bucket starts all over again. This regular and rhythmic approach of collecting, sorting and creating, like any habit, is key to its success.

What I find fascinating about all of this is that although our current use of digital technology is filling our gap-time and preventing us from having the time to sort our buckets of shells, it actually has a very important role to play. It’s not the technology that’s the problem, it’s the way we’re using it. The beauty of digital technology is that it now gives us the opportunity to collect shells from beaches that generations before us never had the chance to scour. It provides a wider breadth of shrapnel from which to make our mosaic and we don’t even have to leave our own homes to find it. It means we have the opportunity to make brand-new, never-seen-before connections between vastly different pieces of information in order to form new ideas and solutions to the problems we encounter in our lives and culture. The problem right now is that we need more time to sort it all out and we’re just not making that time for ourselves. If creativity truly is the act of making new connections, our ability to broadcast a plethora of cultural habits to each other can allow us to cross-pollinate those different ideas, recipes, languages and values that we’ve never had the opportunity to put together before now. We have the chance to be the most creative version of humanity yet – how exciting!

I hope that as a society we can come to recognise the value of fire-staring on the same level as we do our natural resources of oil or water. Perhaps then we can change our cultural values to instil in our children habits for creating legitimised and accepted fire-staring time in our connected world. By doing this I truly believe we can propagate a society where creative thought and problem solving aren’t activities reserved for a chosen elite (or those who label themselves as ‘creatives’) but one where that sort of thinking happens for all of us on a day-to-day basis. The opportunity for innovative thought is immense. Through making fire-staring ‘par for the course’ in the way we live our lives, we’re likely to have a more ‘life-driven’ approach to developing better ways to live and work. We’ll solve bigger problems (and more of them) as we collect jigsaw puzzle pieces from places that previous generations could never have dreamed of going.

August 2011

Ditch your designer label and make people happy

Some time ago I posted quite a scathing view of the job title “user experience designer”. As life moves forward, one learns lessons, and so it’s ironic that I now write this post as, believe it or not, a “user experience consultant”.

Let me start with this: I still very strongly believe that “A project guide to Ux Design for user experience designers in the field or in the making” by Russ Unger and Carolyn Chandler contains a completely inaccurate definition of “user experience designer” and “visual designer”. If you take anything out of that previous post, take that. Those authors have tried to define a Ux niche and have done so poorly. With that out of the way, I’ll move on to something a little more important.

Having recently been a piece of meat on the job market, hounded by real-estate agents who sell people instead of property (they call themselves ‘talent brokers’, others call them ‘recruiters’), I learned within the first few weeks that my skill set had been (and is) very difficult to find. I could code (both front-end and back-end), I was very strongly skilled in branding/identity and ‘visual design’, and I could consult, discuss and strategise with clients at the very early stages of the project. But before this turns into a paragraph about how great I am: I don’t mean to boast. I don’t pretend to be awesome; I don’t think I am. I just love to solve problems and it seems that over the 4 years I’ve been living on my small agency island in Melbourne, the landscape of the outer digital continent has shifted a lot. A single person capable of taking a client from conception to delivery just doesn’t exist anymore – or at least perhaps the perception is that they shouldn’t.

Maybe I’m the last of a generation of web designers capable of doing “web design” in its entirety. Students who have approached our studio for internships and work placement come with a very clear idea of where they think they would fit; “I’m a front-end developer specialising in Javascript frameworks like jQuery and Scriptaculous,” or “I’m a visual designer and my specialty is designing interfaces for iOS”. These are super-specific areas, and for grads to be ‘specialists’ without having worked in a professional environment seems a little daft to me. How do they know that’s what they love or even what they’re good at? Many of them have made their decisions based on whether they had more fun doing one semester of HTML/CSS as opposed to one semester of Photoshop tutorials, and which one they got higher marks for. That’s just not a sound basis for the choice and, dare I say, does not bode well for the quality of our future junior design graduates (but that’s a discussion for another post).

The specialist and generalist debate has raged forever and this post certainly won’t settle it. I won’t even aim to discuss the benefits of each because that’s been done on countless websites the world over. 12 months ago I argued that Ux was just a spin-off of the role I had at the time, a visual designer (‘interaction designer’ was the official title). I considered myself one of those cool hipster designers who loved type, colour and communicating to people using my graphic design skills. I took umbrage at the implication that I didn’t think about the end-user when I was designing. The suggestion that I was just applying brand guidelines and making things look pretty, and that a Ux designer is the only one on a project who has the interests of end users at heart, made me mad. What’s different now? Well, rather than continue to insult the authors about their crappy book, I decided to be proactive and research.

Over the past 12 months I’ve immersed myself in “Ux” – which, for reasons I’ll explain soon, wasn’t a big stretch for me. I’ve bought hundreds of dollars worth of books, spent countless hours on forums, listened to and watched podcasts and attended local events; pretty much anything I could get my hands on or my head in. What’s my conclusion after all of this invested time? Well, I’ve decided to throw out the notions of ‘specialist’ and ‘generalist’ and job titles altogether to embrace one fact – I am a designer.

Wow, it feels good typing it. It’s taken me 7 years of professional design experience (and 27 years of life) to come to this realisation and it feels equal parts prophetic and obvious. It’s easy to get caught up with job titles when you muse about your career. In my own recent experience it’s been a real consideration: “What will my next employer think if I’ve got ‘User Experience Consultant’ on my job record instead of ‘Cool Hipster Graphic Designer from an Ad Agency Guy’?” Of course, it’s still a professional consideration but in the long run, how much does it mean? Job titles in the online industry will be split, merged and renamed as long as technology keeps shifting the goal posts; case in point: “New Media Designer”. Does anyone even remember that one? One goal post that technology will never shift, however, is improving the quality of people’s lives. If I get to do that on a day-to-day basis by making digital or physical products, then what difference does it make what the fashionable business term for it is when it’s time to change jobs?

Don’t get me wrong, I’m not saying “User Experience Designer” is merely a fashionable term. It’s just an evolution. Some have already dropped the “User” to make “Experience Designer”, implying less of a digital focus. “Service Designer” has been around for years and if that doesn’t include elements of ‘experience’ design, then I’m not sure what does. I’ve been an “Interaction Designer” for 7 years. What does that even mean? It doesn’t imply technology, yet 95% of my work has been in the digital realm. What differentiates an “Interaction Designer” from an “Industrial Designer”? An industrial designer designs physical products that are supposed to be ‘interacted’ with, right? I think you probably get my point.

It’s not really a surprise that Ux seems to draw people from all backgrounds. Many Ux designers I’ve met have graphic and/or industrial design backgrounds, and many come from scientific fields like cognitive science and psychology. These two types of people no doubt share the same overwhelming need to help people. The fact that designers and scientists are now working together in digital content delivery is an inevitable natural progression – and a really exciting one.

The tertiary education system tells us that we can’t be generalists. Specialists get jobs, so pick a stream and go with it, whether it’s in the realm of science or design. We end up following the subjects that are more fun or we choose the ones we excel at. We then approach agencies or health services who tell us we’d be the right fit and begin a ‘career’. I was once told by one of my design professors that although designers have the skills needed to solve any problem, the first job you get out of university will essentially define the stream of design (i.e. the type of problems) you’ll follow and solve throughout the rest of your career; you just get quicker at solving them.

What I seem to have found with Ux is an emerging industry where like-minded people, people who want to help other people, are joining forces and putting people first. We’ve all tried the ‘traditional’ jobs that are out there; companies stuck in old ways where tried and tested branding and design from offline media take precedence over usability and accessibility when it comes to digital. The attitude goes: it works in the offline world so it should equally apply to the online one, and the business isn’t set up to adapt to any change, so stop trying to change the system! I imagine it’s the same in the health industry, but I can’t speak to it directly.

The phenomenal uptake of digital technology and the rise of ‘casual computing’ is like nothing our global society has ever had to deal with – we’ve never been able to transfer information so fast across such great distances. The problem is, we don’t yet have an in-depth historical record of mistakes and failures in digital content delivery from which to learn, like we do with drug/health administration or graphic design and branding – services that have evolved over decades. Of course, we have learned some important facts about what does and doesn’t work online, like our beloved ‘splash pages’, may they rest in peace. The fact is we learn to improve by failure, and if digital content delivery is going to be the way we live our lives, the stakes will only continue to get higher as it affects everything we do: health, advertising and professional services all the way through to mining, forestry and agriculture.

User Experience design is the next logical step in our evolution: using the universal principles of design, alongside whatever the current state of technology is, to make things that fulfil our need to engage with digital content and, more importantly, each other in a useful and usable way. Call it what you like.

April 2011

Branding matters for cigarette companies clinging to emotional connection

A landmark decision has been made by the Australian Government to remove branding and ‘design’ from cigarette packaging in a bid to stop (or at least reduce) the incidence of smoking in our country. Since that decision was made, we’ve come to know and understand the value that tobacco companies put on design. It’s positively refreshing to know that design has been recognised by our Government, as well as the tobacco industry, as a key influence on the perception of ourselves and our society.

During the last Australian federal election we were bombarded with Labor and Liberal campaign messages telling us why we shouldn’t vote for the other side. But, whilst the red vs blue battle was being fought, another, more devious war was waged: that of the Alliance of Australian Retailers Pty Ltd on the recently passed laws that would prevent them from using design and branding directly on their product packaging.

When I saw the first advertisement run by this organisation, an organisation I had never heard of, I was curious. Who are they? Can I vote for them? Are they an alternative to the two parties in which I had absolutely no loyalty or interest? Their TV advertisement sounded legitimate enough; it even finished with the typical political ad sign-off of someone speaking really fast about the person/s that authorised the advertisement. I felt compelled to find out more.

Who is the Alliance of Australian Retailers Pty Ltd? Well, first note the ‘Pty Ltd’ bit; that’s right, it’s a company; not an organisation, not a community, a company. According to their website they are “Owners of Australian corner stores, milk bars, newsagents and service stations who are fed up with excessive regulation that is making it harder for us to run our businesses.” Note how specific their membership is. Considering it’s an alliance for retailers, it’s interesting that it doesn’t include the likes of the big retailers; there’s no Myer mentioned, no David Jones, no Woolworths. It appears to be just the small ‘Aussie battlers’ whose core profits revolve around sales of cigarettes. If you take a closer look at the site you’ll find out who the major sponsors (they call them supporters) of this so-called “Alliance” are; the footer goes on to read, “We are supported by British American Tobacco Australia Ltd, Phillip Morris Ltd and Imperial Tobacco Australia Ltd.” With that said, do I really need to go on?

That’s right, the Alliance of Australian Retailers Pty Ltd is a company funded and run by Australia’s largest tobacco companies under the guise of a group most Australians are proud to support and identify with: the Aussie battler. It’s clear that these giants of the corporate world are not happy about the removal of ‘design’ from their packaging, and I’m convinced it’s not because they would rather our society be enriched with colour, type and visual interest.

The proposed new plain packaging design. Image courtesy of: www.ashaust.org.au


There have been many reasons given over the last few months as to why this new legislation, from the Alliance’s point of view, is a bad idea. Some sources have suggested it would make it difficult for small shop owners to find the requested brand/cigarette strength when a customer asks for it on their next visit. This means a higher chance of a dissatisfied customer and the impending loss of business as a result. Others, including the Alliance themselves, believe the legislation will be ineffective, citing numerous international sources and the not-so-convincing argument that if customers can’t see the cigarettes (because displaying them in Australian stores is also banned), plain packaging can’t influence their decision to make the purchase or not.

The arguments the Alliance has provided for opposing the new laws seem a little preposterous. To suggest that shop owners will take longer to identify a particular brand of cigarette when a customer asks for one might seem like the owner is forced to provide a negative customer experience by making the customer wait longer. But, I ask you, when was the last time you minded taking a prescription to a pharmacist and waiting even as long as 10-15 minutes for them to locate your particular drug, in its plain-packaged box, out of the thousands of other drugs they have behind that mysterious counter? Have you not browsed the pharmacy shelves while you waited? Perhaps bought some headache tablets or band-aids that you didn’t think you needed until you saw them? Milk bars won’t vanish if owners take a little longer to identify the cigarette packaging a customer requested. Maybe it should be the societal norm to wait 10 minutes for your box of cigarettes so that you browse the snack foods and other impulse items, leaving with not only your cigarettes but the latest issue of Cosmo magazine, a stick of gum and a loaf of bread because you had the time to realise it was on special.

The second argument (among many) that the Alliance uses is that customers can’t see the store cigarette display anyway, so it would be a moot point to un-brand the packets; it’s likely to have little effect on their point-of-sale purchase decision, right? Well, couldn’t this also be applied conversely: if they cannot see the packet anyway, why not un-brand it – that is, if you truly believe that branding and design play no part in the decision-making process at the point of sale. When you order a meal in a respected restaurant, you are not thinking about whether your spaghetti meatballs are being rolled in the kitchen by an Italian nonna with 30 years’ experience or whether the chef has taken them out of the freezer where they’ve been for the last 6 months, put them in the microwave for 3 minutes and served them to you on a large white plate. If you must have spaghetti meatballs, you’ll order them. The same goes for cigarettes.

It’s clear that the government’s new proposal has hit a nicotine-stained nerve. Those who have studied the effects of branding know how much value consumers put on the slogans on their t-shirts, the logos on the shoes they wear and, yes, the colour of the cigarette packet they carry. If not, Naomi Klein’s book “No Logo” is the quickest way to get educated on the subject. Will these new laws prevent current pack-a-day smokers from buying their next pack of cigarettes? Maybe not. Perhaps their buying decisions have gone beyond brand and are now simply a function of physiological need. Will it help a teenager who’s anxious to form an identity, to be part of a group, reconsider cigarettes as their way in? Is it the act of smoking that’s cool? Or whether you smoke Winfield or Marlboro? There’s no doubt in my mind, and I’m sure in other designers’ minds out there, that unbranding cigarettes will make a dent in the perception of cigarettes as “cool”. Trying to reverse 80 years of marketing across 3 generations needs to start somewhere – I’m proud that it’s started here.

In my opinion, this legislation, first introduced to us by Prime Minister Kevin Rudd’s government, might be the most influential legislation of our generation. It could be quite ironic that something so important was introduced into our legal system by one of the shortest-serving PMs in recent Australian political history. What’s inspiring for me is that these laws, and the subsequent actions taken by the “Alliance” to oppose them, are directly related to design. It highlights the importance of what we as designers know to be so very influential to a brand-consumer relationship. Sure, we’re discussing the eradication of design in this instance and, I dare say, in many other circumstances something like this would leave me disheartened and disappointed. On the contrary, I find myself (and I believe any fellow graphic designer should be too) very much excited that our Australian Government and the most profitable companies in the most dangerous industry in our country put so much value on design that one felt strongly enough to change the laws of our country to enforce its effect while the other so vehemently opposes it.

March 2011

Creative ideas when you least expect them

In a previous post, Creativity is a moment not a personality trait, I discussed the TED talk given by Ms Elizabeth Gilbert about the concept of a ‘genius’. It raised some very big questions in me about my own creative process and how and why I have these ‘moments of genius’ where an idea seems as though it’s presented itself on a silver platter with all the trimmings. I’ve started to track when and where I get hit by these lightning bolts; I’m not that surprised by the results:

My best ideas don’t happen at work.

Have you ever solved a problem that’s been bugging you for months – at a time when you least expected it? In the shower? Staring mindlessly out of the train window? Running, walking, or doing some other repetitive exercise? This is when I have my ideas, and it turns out I’m not alone.

I’m sitting at my parents’ place as I write this, visiting for the weekend. Mum and I sat down to watch an SBS documentary called “Finding my mind“. She was asleep within 15 minutes, but the presenter, Marcus du Sautoy, Oxford University’s Professor for the Public Understanding of Science, caught my attention immediately with his story about his PhD. To keep it short, Professor du Sautoy had been wrestling with a maths problem during his PhD for many months. Everything he tried simply didn’t work; this problem had him stumped. One day he was staring out of the train window on a trip he makes almost every day, enjoying the scenery flying by as usual, until all at once he had the solution – it just ‘came to him’.

It reminded me of Cameron Moll’s talk that I watched some time ago (I can’t find the link to it, so I apologise). In it, he too speaks of those moments when an idea slaps you in the face, completely out of nowhere, at a time when you’re thinking about, well, what you thought was nothing in particular. Mr Moll goes on to discuss preparations and measures he takes to capture that moment so it doesn’t escape: Moleskines, iPhones, audio note-takers etc. My favourite, though, has to be a diver’s slate in the shower, for those occasions where you may be shampooing your hair, letting the hot water run down your back, and you’re suddenly presented with the meaning of life and just need to write it down.

The more you investigate this idea – that creative ideas or solutions to problems you’re facing simply come when you least expect them – the more examples you come across. The only way many of us can explain it is that “It just came to me.” These moments of… do we call them inspiration? Sparks? Lightning bolts? Genius? Does it even matter what label we give it? The fact is that this phenomenon has happened the world over for as long as we’ve been able to diarise it. Yes, we can take personal measures to help us capture it, but I started thinking a little more deeply about it. Design studios the world over – companies who sell creativity – don’t seem to create or provide the environment that fosters these sparks.

I can’t imagine telling my boss that I was taking the rest of the day off after lunch to go for a walk, maybe have a shower and forget about the brief we just spent 3 hours talking about with our new client. In fact, I can’t imagine any studio giving their employees the freedom to do that. But I’d love to know how often the best idea (or at least the most off-the-wall creative idea) comes as a result of a 3-hour meeting on a Thursday afternoon that runs late until 7:30pm.

Inspiring creativity, finding the creative spark, hugging the genius – whatever we call it, it requires time and sometimes simply some personal, quiet space where your brain can disengage. Even a repetitive task where your mind can begin to wander (as Mr Moll puts it) could be enough to do the trick. It makes sense when you think about the way Paul Rand discusses Wallas’ The Art of Thought from the 1920s, where this stage is called the ‘incubation period’.

The problem with design and working in design studios is that time is money. No client is going to pay you $180+ per hour to go for a run, have a shower and stare out of a train window as it heads out into the country. The reality is, that’s when the best ideas are likely to happen. There’s evidence to prove it. I find it funny, actually, the illusion of ‘getting work done.’ Everyone is happy, clients and art directors alike, if we sit inside a cubicle or a space that’s considered more ‘creative’ (like a 33rd-floor ‘open-plan’ office overlooking water) to produce ideas that ‘will do’. Of course, the other option is to take a day or two out of the office, out and about, forgetting about the brief altogether, to produce ideas and work that is out of this world. Are we all just settling for OK work provided we appear to be working between the hours of 9 and 5, giving the illusion of value? If that’s the case, is it still the designer’s obligation to present ideas for studio work that come to them outside of regular office hours, when the creative spark for a project inevitably strikes while chopping vegetables or playing golf? It seems as though OK is OK until something better comes along. If a better idea is sparked inside of you while you’re off the clock, art directors of design studios would expect to be presented with it because it’s all the better for the success of the studio, right? Not to mention the possible promotion you might get out of it? The image you’ll present as “one of the most creative employees we have here at Studio X.” Do we hand over our off-the-clock ideas because creatives, by nature, love the ego boost we get when our creativity is confirmed to us by something or someone outside of our own heads?

The truth is, I love where I work. No, we don’t take showers to come up with creative solutions to client problems, but we recognise that creativity doesn’t just happen; it takes time, incubation. Sure, there are still moments where we need to produce ideas and work on demand and, to be honest, those projects never seem to make it onto the portfolio pages of the studio website. I do wonder, though, whether a radical shift in the way we do business with clients – a change of process to help create the utopian creative environments that time and time again produce sparks of creativity – would produce innovative ideas that aren’t just OK but literally blow our minds.

January 2011

Creativity is a moment not a personality trait

This is a re-post of an article I wrote for the studio I currently work for. A recent comment sent me to a blog that discusses creativity. It inspired me to dig this up in response to their question: “Is creativity a natural-born talent or can it be acquired through hard work and perseverance?”

I watched this TED talk by Ms Elizabeth Gilbert on nurturing creativity about 6 months ago; she poses the thought that rather than a person “being” a genius, all of us “have” a genius.

If you haven’t got 20 minutes to watch the video, the gist is…

A “genius” is a third party, disembodied from the human; a spirit-like presence that runs through the world, passing through each of us occasionally.

If you’ve got a little longer than a sentence and want my take…

When you think about it, it kind of makes sense, this ‘genius’ as an external force. No one is creative all the time; no one comes up with genius ideas every time you pose a problem to them. Ms Gilbert speaks of a time (in the Roman era) when people were said to have geniuses, and it wasn’t until the Industrial Revolution that humans, so self-obsessed with their own accomplishments, began calling themselves geniuses.

I love the idea of this parallel world, a cosmic playground where ‘geniuses’ are climbing an invisible jungle gym, each one tied to a human with an elastic mystical string… our own, individual little genius. Ms Gilbert talks about the pressure that is put on artists and ‘creatives’ after they produce one idea that is deemed successful by the largest portion of a population. “That man is a genius!” has become a phrase used so haphazardly that as soon as one ‘becomes’ a genius, they’re expected to stay one and keep producing innovative solutions to all of life’s problems. If they don’t, the natural human reaction is to think, “Hmmm, maybe I’m over the hill; perhaps I’ve peaked and everything I do from now on will simply be second-rate”.

What we as creatives (nay, all people) need to do is befriend our genius. Understand that there is this external force, a ‘spark’ of creativity. Looking at it this way, the way the Romans did, instantly takes pressure off a person – it turns creativity into a moment, not a personality trait. Perhaps that moment is when our genius decides it’s time to give us a big hug. The question is: how do we ask for that hug?

It’s an impossible question to answer in a single blog post because everyone is different. Perhaps ‘creatives’ or, more accurately, those in creative professions have learned how to ask for that hug in some way; they’ve managed to keep their genius close to them.

Since the talk by Ms Gilbert I’ve tried to identify what happens in my own creative process, to learn from my own actions – what tools do I turn to in the physical world that might help promote that hug from a genius in the metaphysical world? Sure, there’s a lot of research out there on *how* to solve problems – mind-maps, brain-storming, pen and paper vs computer, etc. – but what I want to know is what works for me. What have I learned in my 6 years in a design profession, helping to promote communication visually, and what can I do more of (or less of) to bring the genius closer to me? In short, I want more hugs! I hope to go into this in more detail in future posts as I’ve only begun to uncover some of the methods I use naturally. Needless to say I *do* seem to have a ‘system’ and I’ve surprised myself with some of my methods.

Back to the point of this post though – to inspire those who believe they don’t have a creative bone in their body to take the pressure off themselves and look at it from a different angle. Work out your own way of asking for a hug from your genius, because you’ll surprise yourself with the sparks you’ll see when you ask the question in the right way. We can all benefit from the great ideas that are out there; we just need more people with the confidence to find them!

January 2011

User Experience designer: fact or fiction

The digital landscape has changed considerably in the past decade. As we entangle our daily lives ever more intricately with this thing we call the internet, there’s no doubt that it will continue its charge as a booming industry. It just so happens that with booming industries come leeches. The online digital interactive experience pie is a big one, and in the last few years I’ve noticed a slight shift in who’s eating at the dinner table. User experience designers seem to have pulled up a chair, grabbed a knife and fork and started to feast – but is their company at the meal really worth the time and effort of the cooks and the clients who are preparing the pie?

A few weeks ago I started reading “A Project Guide to UX Design: for user experience designers in the field or in the making” by Russ Unger and Carolyn Chandler. It took me approximately 15 pages to decide that this book wasn’t for me. I closed it, placed it back on the shelf and it’s now covered in 3 weeks of dust. Why? Those few short pages actually made me question whether a ‘user experience designer’ is even a legitimate job in the online industry. Not a bad accomplishment for a book whose purpose is to help those who are trying to carve out a job by calling themselves “UX specialists”.

As I’ve said in a previous post, the online environment is now so big that the early-90s model of a ‘web designer’ no longer exists. There’s too much riding on any project now to have one person responsible for the database connectivity, the graphic design and the front-end development, and because of this we’ve seen job titles crop up over the past few years that further refine the existing roles, turning what was once an all-rounder into a team of specialists. We now have 2 types of developers (front-end and back-end) and a graphic designer (who doesn’t necessarily need any online experience simply to create graphics for the web – a discussion for a whole other post). There’s the salesman/marketer whose job it is to get the work, the digital producer (or project manager), and assistants, directors and junior and senior positions across all of them; the list goes on. Of course, what happens when there’s nothing left to subdivide is that people invent a term called “User Experience” and it provides an additional job for every online project that a studio wins.

What does this have to do with the book? Well, I’ve been struggling to see what value a user experience designer adds to the project mix. For me, it was always a ‘too many cooks spoil the broth’ scenario. I thought this book would hold some magical secret to the meaning of life for a UX designer within the context of a project.

Mr Unger and Ms Chandler tell us that “to be successful, the user experience design must take into account the business objectives, the needs of the users and any limitations that will affect its viability (technical, budget or time restraints).” Alarm bells should be ringing in every graphic designer’s head after reading that, but I’ll come back to it. The book also tries to describe the personality of a UX designer, using words like ‘curiosity’ and ‘comfortable working with many shades of grey’ and, most importantly, telling us a UX designer has ‘empathy’. There’s the defining word: empathy. As I continued to read, I noted that no other role description in this book uses that word. Not even in the “Other roles you may play or may need” section, which describes the most common overlapping roles, including:

  • Brand strategist
  • Business analyst
  • Content strategist
  • Copywriter
  • Visual designer
  • Front-end developer

None of these role descriptions uses the word empathy. In fact, the authors go as far as to degrade the ‘visual designer’ (apparently that’s the UX term for a web designer now) to being simply “responsible for the elements of the site or application that a user sees. This effort includes designing a look and feel that creates an emotional connection with the user that’s in line with brand guidelines.”

Need I say HA? It was at this point that I closed the book, and it all comes back to ‘empathy.’ Yes, I’m taking great pains over this word. The problem I see here is that this book has been published and released to a whole group of people who consider themselves UX designers, or who at least want to pretend to be. It only takes 10 minutes of reading to find out that your job is the most important of them all, because you’re the only one in the whole project team who thinks about the user. Yet nowhere does it mention that the whole basis of successful graphic design is that the user is front of mind when you sit down to ‘create an emotional connection’. To reduce the role of visual design to simply ‘being what the user sees and making sure it adheres to brand guidelines’ shows a profound misunderstanding of the role visual design plays in what is essentially a medium where sight is far and away the dominant sense.

The internet is a visual and aural medium. Until we enter an age where we can change the surface texture of on-screen elements for a touch-screen (like making furry buttons) or invent online smell-o-vision, sight is the one sense that everyone using the web relies on. But, according to this book, the person whose specialty is ‘the visual’ is simply and solely responsible for creating that emotional connection and adhering to a company’s brand guidelines. It makes no mention of the part the visual designer plays in leading the user’s eye from start to finish across a web page; in controlling the pacing of copy; in making sure the key messages are communicated with clarity through a successful visual hierarchy.

It all comes back to this description of a successful user experience design:

“to be successful, the user experience design must take into account the business objectives, the needs of the users and any limitations that will affect its viability (technical, budget or time restraints).”

Apparently this is not the job of the visual designer? To be honest, I could not have summed up the role of the visual designer any more succinctly than that, so the book does get a gold star – even though the authors didn’t mean it that way. I would love to ask them whether a UX designer with no visual training could successfully guide the user’s eye to calls to action like the monthly sale or the ‘register now’ button, which may be the two key business objectives of a project. And what of the needs of the user? Can a person who lives only in the world of wireframes and business analysis make the judgement call on whether the body copy is too small or too light? Can they judge how well line spacing, paragraph spacing and heading weights control the pacing of copy and how easy the page will be for the user to digest? Does a UX designer even know the technical limitations of a 16-column layout, screen-size restrictions for desktop vs. mobile applications, colour variations across monitors, and how to design so that the largest number of users have access to whatever you’re designing? These are the questions that I – and I’m sure every other decent ‘web designer’, ‘interactive designer’, ‘visual designer’, whatever you want to call it – ask every time we need to weigh up the balance between business objectives, the needs of users and technical limitations. If that’s what the role of a visual designer is, where does the UX designer fit in?

“UX designer” sounds professional and cutting-edge; the title allows its holders (as well as the big corporate organisations with disposable income who hire them) to believe that they are. They’re able to charge exorbitant consultative fees to get a project to the point where it’s handed over to the visual designer, who could, in the space of a few short days and some incorrect colour choices, completely reverse the work of the expensive, cutting-edge consultant by leading the user’s eye down the wrong path. Sure, the design may still ‘adhere to brand guidelines’ and it may ‘make an emotional connection with the user’, but if the key business objective is to make people click a ‘register now’ button, and that button is not sufficiently distinguished from the rest of the interface – so that users give their immediate attention to the 32pt Arial Black headline on the other side of the page first – it’s a very expensive fail.

It seems to me that, according to this very popular, supposedly useful and well-written guide book to the realm of UX, the role of the UX designer in the online world is the same as the visual designer’s, except that the UX designer may not have the visual skills or the confidence to critique colour choices and decisions around visual hierarchy. If that’s the case, should a UX designer need formal visual design training before they can call themselves a true UX designer? Or is organizing the expensive exercise of post-design user testing enough to justify the role and decide whether or not the visual designer’s ideas have been successful? If a UX designer has had no visual design experience, how much of the online pie should they really be getting?

November 2010

Graphic design in the ‘pokie room’

It’s a Saturday night. I go down to the local pub for a few quiet drinks with my brother, who I haven’t seen for a couple of months because we live in different states. We’re sitting back, cold drinks in hand, talking about those things that begin to become important in the mid 20s: employment, real estate, finance – grown-up things. Being the first one in my family to move out of the family home meant I hit a steep learning curve in the lessons of life; the stuff that school should be teaching children but doesn’t. When is the best time to buy a house? How much of the household budget needs to go towards savings? How much contingency does one need for insurance? Mum and Dad had to manage those important things, not me. But 3 years on I’ve got a wife, a house and a cat – life’s different, and I can see how one needs to constantly manage all the pieces of one big jigsaw puzzle in order to make ends meet.

Sitting down together with a drink, I begin to try to educate my younger brother – who’s been out of work for 6 months now thanks to the Global Financial Crisis and is still living at home – on how much money one needs to survive; the day-to-day expenses. 25–30% of income goes towards the mortgage, another 25% on weekly bills and utilities; the rest gets divided between savings and spending. He seems to take notice until his friends arrive at our table and lure him away with the dream of winning $10,000 instantly on the same poker machine on which they won $300 last week for an outlay of only $10. He tells me he’ll be back soon but I know he won’t be. I sigh, and sit there alone, watching the condensation on the outside of my glass run onto the vinyl-covered chipboard table.

They’re called slot machines in America, fruit machines in Britain, and poker machines in Australia. I don’t play them but I know plenty of people who do. From the young ones, who have just turned 18 and are excited by the opportunity to now legally toss their money away; to the old ones who, despite a meagre pension as their only source of income, still manage to save $10 for their Wednesday lunchtime at the R.S.L. club, where they can try their old, withered hands at finally cashing in on the big time. If they did win the jackpot, at least their sons and daughters wouldn’t have to pay for their funeral, right?

Pokies are an everyday part of life in Australian pubs and clubs across the nation. You can’t walk into a leagues club or RSL without hearing the ever-familiar tone of the poker machine pied piper: the one-armed bandit. Everyone knows the chances of winning on one of these machines, and in Australia it’s law to print those odds on the machine as an attempt to remind people that indeed, 999,999 times out of 1,000,000, the house will win.

There are of course dozens of reasons why people play these machines. Some call it fun; others simply can’t help it – it’s an addiction. I don’t know the exact stats on problem gambling, but the fact that “problem gambling” is a commonplace phrase suggests to me that it’s a big issue – too big for me to fix anyway. Or is it? It got me thinking: how much of a role could design play in repelling people from the false hopes these machines provide?

When I went to check on my brother after finishing my drink, I stepped into the dimly lit ‘pokie room’ and my eyes did a quick scan. There were about 30 machines, 25 of them occupied. Not a bad turnover for a local, suburban Sydney pub. My brother was playing a machine called Queen of the Nile, which I later found out is the most popular machine in Australia at the moment. It’s been in pubs and clubs for over a decade (since 1997) and is still top of the list! Why? Because it pays out more? I doubt it. Because it’s easier to use? It doesn’t appear to be any different from all the others – it has the same number and arrangement of buttons. Maybe it’s the sexy vector graphic of Cleopatra luring you towards her with the promise of fame and fortune. Could that be it? Her eyes are practically saying that all you have to do is pay her, push her buttons and you’ll be rewarded – kind of like a sex worker, if you think about it. Well, if you knew the story of Cleopatra and how it ends for her, you probably wouldn’t be so easily lured.

What I found interesting about the poker machine room as I glanced around is how happy every machine looked and how unhappy every human being looked. Not only was sexy Cleopatra bathing in the Egyptian sun by the Nile, but next to her, her husband, the King of the Nile, was giving me an equally sexy smile from his own machine. A little further to the left there are cartoon constructs like Mr. Cashman. Oh, you can trust him – a smiling gold coin wearing a top hat and immaculately clean white gloves, he won’t take your money. And animals?! What are the penguins of Penguin Pays or the lions of 50 Lions going to do with your money? Penguins and lions don’t need human money, so they must be machines that do nothing but give it back to you, right? I can’t understand the psychology here at face value.

I’ve been lucky enough to visit Egypt and it’s not exactly a well-off country. People seem to work 24/7 just to have enough money to survive. Sometimes even that isn’t enough; and that’s no exaggeration. While I was there I saw a husband-and-wife team on the streets of Cairo change shifts, tag-team fashion, at 2am – just so they could continue selling pirated DVDs on the off chance that a tourist or resident feels like picking up Transformers 4, six years before it’s even made in Hollywood, at the convenient time of 2 in the morning. What’s ironic is that such a poor nation is used as a marketing tool in 1st-world countries with the intention of making the well-off poor! The Egypt I see in the poker machine room doesn’t nearly reflect the reality of the country, but we as consumers never really second-guess that. I know my brother doesn’t. It’s just a pokie graphic, right?

So is it OK for designers to create these happy, bouncy, mood-lifting graphics for a cause aimed at robbing the rich as well as the poor? When I began my design career I worked for a company that was responsible for creating the graphics for the poker machine giant Aristocrat. It sounded really fun; being able to create happy images – exciting, positive, cute illustrations – in my first ever design studio. What I didn’t consider back then is how completely removed I would be from the reality of the context in which they’d be used. I look back at that time, a not-very-ancient 5 years ago, and I’m now so happy that I had a falling out with the director over a pay dispute within the first few weeks, before I got the chance to contribute to the Aristocrat empire by way of my illustration skills. I’m sure back then, in my first design job, I would’ve been more than happy to get a few nice, professionally art-directed illustrations into my portfolio.

In my opinion, the graphics used on poker machines to create a sense of hope, happiness and security are simply unethical; they’re lies. Humans, though, seem to be like moths to a light when it comes to poker machines. Why do we not see the reality of the effects of problem gambling plastered all over these fluorescent machines, these bug catchers? Why doesn’t Mr. Cashman have a frown instead of a smile? Maybe he could be renamed Mr. Trashman and feature a photograph of a homeless bum? Why isn’t the Queen of the Nile sitting on the sidewalk of a Cairo street, one arm missing, begging for a slice of bread? And penguins?! The only thing a penguin should be paying from a poker machine is its respects to the player’s family, who can’t afford to eat because their mum keeps giving the shifty-eyed animals her pay cheque every week.

It sounds preachy, I know, and in a perfect world designers would mill about on the fringes of culturally significant, sustainable design projects, never needing to contribute to the business of adding to an already significant problem in society. I understand those designers for Aristocrat still need money in the bank – and if Aristocrat’s designers are OK with trading their design skills for some problem gambler’s pay cheque, I have nothing against it; I was almost, unknowingly at the time, one of those designers.

I don’t pretend to have the solution to this. Designers need to design, even for poker machine giants (see “Design is dangerous in the hands of the unskilled”), and problem gamblers need as much assistance as they can get to kick their addiction – maybe it can’t be a win-win for designers on this one. I don’t think Mr. Trashman would sit too well with the Aristocrat board as a new design concept. Perhaps it isn’t the graphics at all that make a person feel more comfortable spending their night dropping 20c coins into a small slot? I know I’m repelled when I see the beckoning call of a poker machine – but then again I’m quite cynical, and perhaps my training as a designer has made me aware of its motives.

There’s no doubt in my mind that a more repellent, less welcoming approach to the graphics on these machines could at least be part of a multi-pronged attack to rid problem gambling from society. Stripping back the colour and communicating the more likely scenario of loss rather than gain – in a way that grabs one’s attention the same way Cleopatra does – would be a start. Every now and again the issue of problem gambling rears its head in the Australian media and discussions arise. These discussions usually involve a government representative, a not-for-profit organisation like Gambling Help Online and pub or club owners across Australia. In fact, our recently hung Australian parliament came one step closer to being resolved because of a deal made between MPs to reduce problem gambling in our culture. But if design can make even the smallest difference, which I’m convinced it can, then it should be our responsibility as designers (and the responsibility of the key decision-makers in our society) to be involved in this public forum on problem gambling. We can use our training for the betterment of society and provide guidance on how to achieve the balancing act that keeps the Aristocrats in business, the publicans pouring ale and, most importantly, food on the table for designers and their out-of-work, 20-something gambling brothers.

May 2010

The private approach to design education

The Faculty of Architecture at Sydney University was where the seeds of a love for design were planted, but I didn’t know it right away. Spending 6 months analyzing the functions of a chair hardly seemed like ‘design’ to me then. In fact, it was one of the most boring things I had ever done. I wanted to play with colour, line, type and texture – I wanted to create. But as I look back now, I see the wisdom of Dr. Mike Rosenman and Professor John Gero. How can one design something new without knowing why and how we’ve created what already exists?

A chair to me now is a fascinating object. The fact that for thousands of years humans have been trying to come to grips with the design problem of ‘the chair’ is mind-boggling. A chair in one context may not necessarily be as functional or as aesthetically pleasing as one in another. As far as I’m concerned, the 3 years I spent learning about Architecture, Object Design, Multimedia, Information Systems and the human psychological response to everything was money and time well spent. It’s actually not long enough. I know now that a design education can hardly be confined to a finite time period – a good designer learns about design their whole life. A good designer grows because of that.

With this firmly set in my head, I was confident that I had finally discovered a small diamond of knowledge that I could keep with me in a velvet purse as I traversed the rocky path of my own design career. You can imagine my shock, then, when I turned to an inside cover page of my wife’s latest Frankie magazine and found a double-page spread telling me, “A world class design education needn’t take forever!” Apparently I could become “an immediately employable designer who has total confidence in my ability to take a brief, use the programs and meet the deadlines”. There was no chance I’d be left behind, either, because the college doing the advertising has “constantly evolving courses to keep in line with current common practices and design trends in the industry.”

Yes, it’s well-written marketing material – you can jump on the conveyor belt, pay your $10,000, enter the big pretty box and come out the other side an accomplished, employable designer. Not only that, but Shillington College tells us that if we want proof of their success, we simply need to look at the ‘high number of their graduates who attain high quality employment in the design industry – Saatchi & Saatchi, Leo Burnett, Frost, Interbrand and BMF’ are just a few of the design ‘studios’ where Shillington graduates now work.

I was really disappointed. If I had only known about this before, I could have saved 2 years of my life and $6,000, and been employed by a company like Clemenger BBDO! However, the sheer shock of this article led me to read it again, more thoroughly this time. As I scanned the page, line by line, the message became a little clearer –

  1. “A world class design education needn’t take forever!” or, in other words… we have colleges in more than one country (that’s the world class bit). Ask yourself: how could we have expanded into other countries if we didn’t do what we say we do really well? Our courses don’t go for longer than a year, which will lead you to believe that they don’t cost as much as a university or other private educator – so we’re hoping that will prompt you to call us.
  2. “Train students to become immediately employable designers who have total confidence in taking a brief, using the programs and meeting deadlines” became: we don’t say we’ll get you a job, because we’re not a recruiter or an IT education institution. All we’re saying here is that if the big agencies are using Adobe CS4, then we’ll teach you how to open it, move layers around and save a file for web. That’s all you’ll need to know when you’re working 9am–1am on banner ads for a big agency’s corporate client brand rollout that was so poorly managed that everyone has to work back for the week.
  3. “Constantly evolving courses to keep in line with current common practices and design trends in the industry” was interpreted to say:
    What’s that? Vince Frost is using CS5 to apply swirly vectors or neon shapes to flash banner ads now? Quick, let’s buy some licences and show people the new features – there’s a really cool automatic swirl brush in Illustrator that will save some time. And of course, last but not least, Shillington has a:
  4. “High number of graduates who attain high quality employment in the design industry – Saatchi & Saatchi, Leo Burnett, Frost etc”, or, put another, more accurate way… we prepare you to become a Mac operator for the first 4 years of your design career. You’ll have all the perks that come with working at a big agency – working on your own in a corner, pumping out varying sizes of static banner ads that someone else gave you the creative for. You’ll have the pleasure of working back late to get a job out on time – that means having dinner and breakfast at work. You’ll get to play on the company pool table or the company Wii console at lunch instead of leaving the office, just in case a job comes in and your 1-hour lunch break means the client has to wait until you return to get their iteration back – before they change it 60 times anyway on the way to approval.

It probably sounds like I’m a disgruntled ex-university student who simply spent too much time and money to become a designer. If that’s your impression, then please pay for Shillington’s course and visit me 2 years from now to tell me how well-rounded (and happy) a designer you are. I’d love to hear your success story.

After reading Shillington’s double-page ad I couldn’t help but be reminded of Michael Bierut’s essay “Why Designers Can’t Think”, on the difference between the Swiss and American approaches to graphic design education. He describes the Swiss approach as one of theory before practice: you have to know why you would use Helvetica rather than Univers for a piece of design; you spend your time exploring Gestalt principles and completing simple exercises that have little or no ‘real-world’ application. The American approach, on the other hand, seems to be portfolio-focused; the mentality where replicating current design trends in your assignments is the goal. Of course, the idea behind it is that what you end up with is a portfolio whose author any studio would be glad to have as a team member – it shows you can ‘design’, right?

Are we witnessing Mr Bierut’s “portfolio vs process” internally here in Australia, between universities and private schools? And where does our TAFE system fit in the mix? Having gone through the process system, I’m unsure how anyone can call themselves a graphic designer if they simply know the programs. What’s worse is that you can pay a yearly fee of a couple of hundred dollars to Lynda.com and get a wider breadth of program tutorials at a fraction of the cost! Knowing how to use the technology doesn’t make you a designer – it makes you a Mac operator. To me, a designer knows the why, not just the how.

With all of this in mind, I find it disheartening to hear that private colleges that seem to favour the “portfolio” focus over the “process” are growing. They’re slowly moving into each major city, touting that theirs is the course that will make a student the next Ken Cato of the Australian design industry. What does it mean for the quality of Australian design if, 100 years from now, the majority of our graphic designers are knowledgeable only of the how and not the why? Or is the why something we’re supposed to learn throughout our careers? If big agencies are now happy to employ those who know only the ‘how’, then who will be left with the knowledge of the why to continue making meaningful, intelligent and successful graphic design in this country?

April 2010

The delicate task of design

I’ve been reading “Do Good Design” by David B. Berman. It’s a short book, a mere 100 pages or so, and has lots of examples of design that Mr Berman believes could be done better. I don’t mean better in the sense of adhering to the principles of graphic design; I mean better for the world, for the global conscience. I haven’t gotten very far in the book yet but one particular article has already struck a chord: graphic designers in the sex industry.

In my last post, about design irresponsibility and the Australian Sexpo exhibition, I posted a re-design of the Feb 2008 Playboy magazine cover. I tried to make the point that designers with a little flair of creativity can promote these sorts of magazines with messages that bypass the young mind of a child whilst speaking loud and clear (as well as boosting sales) for the pornography moguls. I was so caught up in graphics that I didn’t even pause to think whether the peddling of pornography by a graphic designer is ‘right’. Mr Berman raises the issue only briefly but it’s one that nonetheless needs to be discussed. When is it ‘wrong’ for a graphic designer to peddle a product or service that might not be considered morally just, or that might be having a negative effect on society?

I’m sure neither I nor David Berman is the first to put the idea out there. Yes, there’s sex, lies and violence in today’s world and yes, they are commodities. That’s simply a fact. The question is: are graphic designers who agree to work on advertising material for these industries lesser designers than those who work for the not-for-profit or education sectors? Are they contributing to the ever-decreasing age of sex- and drug-related criminals? Would the world be a better place if designers kept their crafty fingers and creative ideas away from these filthy industries?

In tossing these questions around in my mind, I found them difficult to answer. Graphic design in the sex industry, for example, seems so formulaic – so maybe those designers aren’t as talented as others; or is the work they produce just what the client demands because it worked last time? Perhaps the explicitly violent posters for movies starring the likes of gangster role models 50 Cent (or is it Fitty?) and Eminem are indirectly getting guns into the hands of minors sooner. Would the world be a better place if designers weren’t in these industries? I’m not so sure.

The reality is that graphic designers are skilled communicators. We’re experts in type, colour and layout and, more deeply than that, we understand the human psychology behind these elements in triggering and re-triggering an emotional response. What do you think you would be exposed to if graphic designers did not play a part in the images we see?

It’s true: there are some designers out there who don’t seem to have a moral conscience. And if the price is high enough, we can all convince ourselves that we’ll do a good, clean job. But on the whole I refuse to believe that a trained, professional designer consciously chooses to promote sex and violence recklessly, just for the sake of it.

If graphic designers across the globe one day got together and decided, “We’re finished! We will no longer be the people who create the graphics for these industries! Everyone says it’s our fault that kids are getting pregnant at the age when they experience their first period and the local high-school kids are too afraid to go back to school in case Jimmy shows up with a semi-automatic.” – not only would there be a spike in homelessness from all the designers no longer being able to afford their rent, but I dare suggest that the incidence of violence and teen pregnancy would increase. My point being that putting the delicate task of design into the hands of the untrained is more dangerous. Yes, it’s comparative, subjective and downright speculative to suggest it. But take the scenario out of the design world and into, say, the world of home maintenance: would you be happy to pay a much smaller fee to get an untrained person to wire your house – lights, appliances, ceiling fans, everything – instead of engaging the skills of a licensed, professional electrician?

Graphic designers need to work in all industries, from the morally reprehensible to the good and socially beneficial. Without these people – or should I say “us” – surely the alternative, which is almost unimaginable to me, would be worse. What I’m trying to say is that designers are simply the licensed tradesmen of communication. What I love about design is that we’re needed in every industry. One would imagine this to mean that design is a flourishing industry where, once you’ve got your qualifications, you’ll never go hungry again because there’s just so much work; but alas, it’s a different story, and one better elaborated on in a different article.

The powers which graphic designers have are indeed often underestimated – by clients as well as by the designers themselves. I believe the designer needs to have the self-confidence to get behind their own ideas, throw some creative solutions at their clients and not be afraid to explore options beyond the formula. Next time you design a poster with 50 Cent holding a gun and someone criticises you for spreading the message that violence is OK, perhaps argue that industrial designers are to blame. If guns weren’t so instinctive to hold, so easy to operate, perhaps only their target demographic would use them, leaving the hands of children and teens clutching lollipops and ice-cream cones instead.

April 2010

Designing the adult world with children in mind

Why do all of our men’s magazines look the same? Penthouse, Playboy… the only way to tell the difference these days is the masthead. What if you got rid of the girl on the front cover just once and used some creative graphic design? Would the crowds flock to buy the magazine? Or would it spell economic disaster for a pornography giant?

A few weeks ago the Brisbane Times reported on a community furore over a billboard for the adults-only exhibition, Sexpo. Sexpo is, according to its website, a “sexuality lifestyle” expo. It boasts its ability to provide access to international performers from all aspects of adult entertainment, and its ‘aim’ is to “provide a fun and vibrant atmosphere for like-minded people to enjoy and find information about all things adult.”

Now, to a twenty-something male like me, this should sound pretty exciting; porn stars and adult toys, side by side, all under one roof. The crazy fetishes, the on-stage entertainment – I mean, if you have a look at some of the event photos it’s pretty clear that the majority of attendees fit my demographic. But, to be brutally honest, it sounds a little boring and just a bit dirty to me. I’m no fun sponge though; if adults are keen to explore this sort of exhibition as a way to communicate or interact with others about their sexuality then by all means, go nuts – and pardon the pun.

As a designer, what I do find irresponsible is the complete disregard for the broader social context in advertising this event. There has been a lot of media attention and community debate about the design and placement of the billboards used to promote the event in Queensland. Despite all the controversy around the location and contents of the ad, it essentially boils down to one thing: irresponsible graphic design.

The simple fact is that, as a society, we’re being desensitised. The design of sexually targeted material from even as little as 10 years ago leaves today’s young men flaccid, bored and keen to bid on the next young girl willing to put aside her morals and auction her virginity on eBay.

What concerns me is not knowing where it will stop. In 10 years’ time, will the societal norm become one where graphic designers can show a porn star’s nipples in an ad campaign… without covering them with stars or little vector love hearts? I mean, they’re porn stars, right? Why censor their god-given anatomy in a still image if they’re plastered all over the internet, where society can freely access them anyway?

Or what about just the female form in its entirety? Will it be considered ‘acceptable’ by Generation Y to paste up this level of gratuitous nudity in local newsagents to advertise the new issue of whichever men’s magazine survives the ever-growing production costs of the printed page? Will the children of the future simply get bored with full-frontal nudity and turn to genome projects to create something of a Total Recall-esque woman (the three-breasted lady) in order to get enough stimulation to continue to procreate?

I’m an old-fashioned guy; I still believe that a little mystery goes a long way in provoking a thought or creating a mental image. There’s a reason why human genitalia (apart from maybe the statue of David) don’t come to mind when discussing art around a dinner table; at least not as often as the suggestive smile of an eyebrow-less lady or the stare of a girl who might don a simple pearl earring. If we are being desensitised, and the next step in the advertising evolution is to drop all barriers, as well as our underpants, and admire the human figure in all its hairy, sweaty glory, why not start now, and why not do it creatively, setting a trend for how we should approach this for future generations? I believe we can shift this trend for the better, but we need to act sooner rather than later.

After my most recent post on designing for the broader social context I decided to set a challenge for myself: to redesign the sexually explicit imagery featured in adult entertainment. And where else to start but, of course, Playboy magazine; the icon of adult entertainment – bunny and all. Could it actually be possible to design a cover for one of these magazines that, to a child, could seem perfectly innocent but to an adult would communicate a completely different and raunchy message?

The first step in this exercise was to find a recent Playboy cover; funnily enough, this isn’t very hard. (There’s just too much scope for puns in this article.)

Image of Playboy cover 2008

In my research, it was pretty easy to see that historically these designs don’t change very much, and they’re pretty common across many of the men’s magazines that have been (and still are being) published. One could argue that this is driven by Playboy and its pioneering of what sits at the forefront of sexuality advertising over the last 50 years. After analyzing a few examples, I found that they often contain 3 key features:

  1. A large masthead to identify the brand of the magazine and the date of the issue.
  2. A photograph of a woman, either in a strategically nude position or dressed in what is essentially an embarrassing, fantastical costume. Often accompanying the image is a caption to identify the woman, in case you didn’t recognize her from the latest new release when you last browsed the adult video store.
  3. A list of the contents of the magazine: feature articles, suggestive headings, etc.

Now, I’m not sure about you, but a parent who happens to be browsing their local magazine rack with their 5-year-old child in hand can’t control what their child’s little, over-stimulated brain absorbs. And this sort of imagery is certainly something I wouldn’t be keen to have my child subjected to at any level. I mean, is Tiffany Fallon dressed as Wonder Woman really something I want little 5-year-old Diedre to use as a role model?

This seems to be the current state of play, though, and thus forms the basis of my challenge: what can I say to Diedre’s dad that will simply fly straight over the little girl’s head without doing any damage to her impression of how adult women should act or dress (or undress, in this case)? Of course, good design is also commercially successful, so I still need to get the attention of her dad – he’s our key demographic here as he browses the fishing or motorsport magazines – and what better way to do this than with the current societal taboo: a reference to human genitalia.

Illustration of Playboy cover design

And so you can view them side by side:

The real Playboy vs the fake

I designed this using the copy from the cover of the Feb 2008 issue and overlaid my own creativity to replace the feature image and caption. I didn’t want to show or tell my wife until I was satisfied I had fulfilled my own brief.

Her immediate reaction upon seeing it was, in fact, shock, but I wasn’t surprised. “That’s wrong,” she said, but she chuckled immediately and almost ashamedly to herself. What was interesting was that the longer her eyes devoured the cover, the less wrong it became. She could imagine telling little Diedre that it was a gardening or cooking magazine while knowing full well what the cover was alluding to. Its intention, of course, is to use humour and simple graphic design to communicate directly with the target audience without sending other, more damaging messages to the audiences who are not the key demographic but who are still exposed to the work – exactly what a good piece of design is supposed to do.

If you’ve managed to find this post, I’d love to hear your thoughts. Please tell me what you think; whether it is wrong in your opinion, or whether you’d be happy for your children, none the wiser to the subtleties of this take on the female form, to see it in their local newsagent while daddy browses the motor car section right next to where they keep the adult magazines.

Of course, any design needs to fulfill the commercial aspect of the brief: “Will people buy it over its neighbouring monthly issue of Penthouse?” Well, in my opinion, if Playboy ran their March cover with something like this on it, sales would not diminish. In fact, they might even gain a few out of sheer curiosity. Humour is a powerful tool in the designer’s arsenal. This magazine would surely jump off the shelves; its contrast with the sea of familiar faces from the elite adult entertainment world surrounding it would make this a viable commercial option for an adult entertainment publishing giant like Playboy. And if it doesn’t work? Well, I hardly think it would make a dint in Playboy’s profitability when the next month’s issue is just 30 days away. So why don’t these companies use their brands for good and give it a try?!

I won’t lie: I found this exercise really fun, and I plan to try a few more in the near future with a particular focus on the horror film genre. If you’d like to see a re-design of a particularly scary or sexually explicit movie, please let me know.

April 2010

Are computers getting in the way of creativity?

After reading the slender, 96-page book “Paul Rand: Conversations with Students” by Michael Kroeger, I find myself stunned at the concise rantings of one of our industry’s most famous modernists.

Among the many ideas that Mr. Rand presents to his ‘students’ – who include well-respected graphic design educators from institutions across America and the globe – I found one in particular that struck a chord with me: the creative process. Mr. Rand makes reference to a book written in 1926 by Graham Wallas called “Art of Thought”, which presents one of the first models of the creative process. Wallas outlines 4 stages:

Preparation: This is the stage where you clear your head of the millions of other things you’re thinking about; what you intend to do on the weekend, getting the gas hot water heater serviced, picking up the kids, etc. It involves what we now call ‘brainstorming’: exploring every facet of the problem and coming up with ideas, good or bad, about ways in which we can solve certain dimensions of that problem. Then you leave it.

Incubation: Or, as Paul Rand puts it, forgetting about the problem. Go about your daily life for a day, 3 days or even a week until, of course, you find yourself at stage 3.

Revelation: That ‘spark’ where the solution or “the real problem” simply reveals itself. This is the fun bit! You think you have ‘the’ idea. But is it the idea? That’s where the next stage is critical.

Evaluation: Where you ask yourself, the client, your friends and your family whether they agree that your solution solves the problem. You iterate until all are happy.

I should make it clear that I haven’t actually read the original publication, “Art of Thought”, so perhaps there’s a little ‘Chinese whispers’ syndrome in the steps above, filtered through Rand’s interpretation as well as the research I’ve done on the internet around this book. At $3,500 for a copy on Amazon, I think hearsay is as close as I’ll get to reading it.

I often have conversations with my wife, also a graphic designer, about sometimes not being able to come up with an idea (let alone ‘the idea’) for a design problem at our creative director’s request. It’s hard work! You get a dodgy brief or some inarticulate direction and they leave you saying, “Let’s catch up at the end of the day to see what you come up with.” What then happens is 6 hours of ‘trying things’ in Photoshop or in a sketchbook, only to realise that the end result is unrefined or that you’ve gone in the wrong direction. I’m sure my wife and I aren’t the only graphic designers in the world who struggle with this sort of approach to design.

What I have found, though, is that my best work often comes when I’m told about a job a few days in advance, whether it’s a brand or some little widget we need to create for a website. When the design problem has time to incubate.

What I believe we’re experiencing today is communication at a speed that’s hindering proper (or at least exploratory) levels of creativity. People expect results ‘at the end of the day’ or ‘in an hour’ or ‘within 15 minutes’. Perhaps this is a bigger problem than just creative thought, too. Perhaps, as a society, we’re simply becoming more impatient. Our expectations of when we should have things (and how we get them) are becoming more demanding. Is the global financial crisis a result of this increased impatience too? People living beyond their means, wanting houses and products now, not later?

Technology’s exponential growth seems to have instilled a fear in business, a fear of being left behind. Design studios are not immune to this either, and so it follows that with tools like the computer, like Photoshop, an expectation has been set amongst clients that technology makes it really quick and easy to produce ‘design’. Well, it is quick – there are no longer days between design paste-ups, cutting stencils and playing with different materials like cellophane. But isn’t this essentially the ‘production’ phase? Don’t we need to step back from the computer for a moment and focus on the idea? On solving the problem?

Art of Thought focuses on the creative process from a cognitive perspective; it’s simply ‘how our brains work’. It’s science. And although it was written almost 90 years ago, the question stands: do design studios truly believe that the work they did in the 24 hours after getting a brief from a client is the best work they’re capable of? Or do they simply have no more time because of budgets and deadlines? Our processes have evolved to meet business demands; our brains haven’t.

In my opinion, a designer is actually working 24 hours a day, 7 days a week. With inspiration as likely to come from browsing the net at your computer as from walking down the street and catching a glimpse of the shape of a pigeon’s wing, a keen designer-eye is never really ‘switched off’. But how do you charge for that? What we’re essentially talking about here is the work of the subconscious. The problem is tattooed onto our brains, and even when we leave work and ‘clock off’ for the day, we can’t control those little synapses waiting for a sensory cue to make a connection between two loose wires that never thought about touching before – and suddenly it’s an idea, ‘the’ idea.

By acknowledging this biological certainty we can begin to set realistic expectations about the things we can control, like timelines and budget. I believe that by stepping back from the speed at which technology allows us to produce iterations of a design solution, and taking advantage of what we know about the creative process and how our brains biologically work, we can improve the quality of work and the levels of creativity and innovation that our studios produce. Surely no client can argue that a better idea, a more fool-proof solution to their design problem, isn’t worth a little more time – perhaps at no extra cost.