It’s that time of year again, folks. The time when you always seem to ask yourself: Why are my allergies so bad right now? The itchy, sneezy, watery-eyed mess you become with the change of the seasons is often tied to whatever is blooming outdoors. In spring and fall, for example, pollen and ragweed counts are high, which can make your allergy symptoms worse during these times of year. But what does it mean if your allergies feel worse this year than ever before?
Truth is, allergy symptoms can vary a lot from one year to the next, says Jen Caudle, DO, a family medicine practitioner in Washington Township, New Jersey. A lot of it has to do with weather-related stuff, but lifestyle habits can play a role, too.
Here, we’ll dive into the most common allergy culprits right now, and what to do to feel your best—no matter the pollen count.
Beyond what’s in bloom, there are other factors that can contribute to worsening allergy symptoms. This includes:
“Seasonal allergy symptoms tend to flare up in the spring and the fall,” Dr. Caudle says. That said, not every single day throughout a given season will feel the same. Shifting weather and other factors can change how much of an allergen is in your environment, which can affect your symptoms, per the American College of Allergy, Asthma, & Immunology (ACAAI). Some of these factors can include the following:
Exposure to things like smoke or strong odors might set off your symptoms or make existing symptoms worse, Dr. Caudle says. Same goes for whatever’s happening in your indoor environment. For example, if you normally shut the windows but they’ve been open all day, there’s a good chance that allergens from outside will have wafted into your living space, which can exacerbate your symptoms.
Some allergy seasons are just worse than others, TBH. It all comes down to whether it’s warm or cool, dry or rainy, and so on. If the winter’s been pretty mild, you can bet that plants will start pollinating earlier, which translates to spring symptoms flaring up sooner.
Rain in the spring isn’t great for allergies either. It encourages plants to sprout up bigger and faster and encourages the growth of mold, both of which can make symptoms more intense, the ACAAI notes. In other words, this could be why your allergies are so bad all of a sudden.
It’s not your imagination: Allergy seasons are generally more hellish today than they used to be. “Seasonal allergies are most commonly caused by pollen,” Dr. Caudle says. And the changing climate has led to pollen seasons starting 20 days earlier, lasting 10 days longer, and featuring 21 percent more pollen in recent years compared to 1990, according to the National Institute of Food and Agriculture.
In other words, the more our climate fluctuates over the years, the worse (and more unpredictable) allergy seasons may become.
Certain habits you might not think twice about can affect your allergen exposure, and therefore, the severity of your symptoms. Some common ones include the following, per the Mayo Clinic and the American Academy of Allergy Asthma & Immunology (AAAAI):
A lot of things about allergy season are beyond your control (like, you can’t change the weather). But there’s still a lot you can do to keep your symptoms in check. “When I am treating my patient’s allergy symptoms in the office, I not only discuss appropriate medication options, but I also discuss lifestyle changes that will benefit them as well,” Dr. Caudle says. Here are a few:
Avoiding your allergen as much as you can is your best bet, Dr. Caudle notes. But when you’re dealing with environmental offenders like pollen or mold, it can be tough to steer clear 100 percent of the time. In that case, these tips can help, per the Mayo Clinic and the AAAAI:
Another tip? Drinking coffee for seasonal allergies (especially in the morning alongside your antihistamines) could help alleviate symptoms like congestion, Purvi Parikh, MD, an allergist with the Allergy & Asthma Network, previously told Well+Good.
When lifestyle changes aren’t doing enough on their own, over-the-counter (OTC) or prescription allergy meds can help. Your primary care doctor can help you decide which med is right for you based on your symptoms and how severe they are. Some of the best medicine for allergies includes the following, per the Mayo Clinic:
Figuring out what helps with allergies may require a trip to your doctor. You can start by seeing your primary care doctor, who may refer you to an allergist if your symptoms are severe. An allergist can do a quick skin test to confirm what you’re actually allergic to and offer additional treatment options like allergy shots, which can make you less sensitive to your allergen over time.
You’ll get the best possible handle on your allergies by seeing an allergist, Dr. Caudle says. “You can get a proper diagnosis for your symptoms and then follow any treatment and prevention recommendations from your doctor.”
Usually, the right treatment plan should help you sidestep the worst symptoms, so your allergies are more of an occasional annoyance than an everyday soul-sucker.
Honestly, why wait? If your allergies are annoying and you’re having any kind of trouble managing them on your own, make an appointment to see your doctor. They can confirm that you’re dealing with an allergy (as opposed to another health issue, like asthma) and help you come up with a treatment plan to help you feel your best.
And of course, if you have other symptoms like fever, sore throat, or nausea and vomiting, you may actually have an infection rather than allergies, which might need to be treated with antibiotics.
A common myth about allergies is that they get worse with age. But it really all depends on the person, and how severe their allergies were to begin with. Some people may notice their allergies improve after a certain age. Others, however, may get hit with new or worsening allergy symptoms as they get older, due to immune system changes, according to an April 2017 paper in Aging and Disease.
There’s definitely a lot of overlap between allergies and a cold or the flu: All can cause congestion, a runny nose, coughing, and generally leave you feeling lousy. Two key differences? “You can have a fever or body aches with a cold or the flu but not with allergies,” Dr. Caudle says. If you have a fever, your body may be trying to fight off an infection or virus.
May is typically peak pollen season across the U.S., according to the Allergy & Asthma Network. But there can be some variation based on where you live, with warmer climates potentially peaking earlier and cooler climates peaking later. Some people may even get summer allergies or fall allergies, as temperatures fluctuate and different plants start to bloom.
Do you find yourself designing screens with only a vague idea of how the things on the screen relate to the things elsewhere in the system? Do you leave stakeholder meetings with unclear directives that often seem to contradict previous conversations? You know a better understanding of user needs would help the team get clear on what you are actually trying to accomplish, but time and budget for research are tight. When it comes to asking for more direct contact with your users, you might feel like poor Oliver Twist, timidly asking, “Please, sir, I want some more.”
Here’s the trick. You need to get stakeholders themselves to identify high-risk assumptions and hidden complexity, so that they become just as motivated as you to get answers from users. Basically, you need to make them think it’s their idea.
In this article, I’ll show you how to collaboratively expose misalignment and gaps in the team’s shared understanding by bringing the team together around two simple questions:
These two questions align to the first two steps of the ORCA process, which might become your new best friend when it comes to reducing guesswork. Wait, what’s ORCA?! Glad you asked.
ORCA stands for Objects, Relationships, CTAs, and Attributes, and it outlines a process for creating solid object-oriented user experiences. Object-oriented UX is my design philosophy. ORCA is an iterative methodology for synthesizing user research into an elegant structural foundation to support screen and interaction design. OOUX and ORCA have made my work as a UX designer more collaborative, effective, efficient, fun, strategic, and meaningful.
The ORCA process has four iterative rounds and a whopping fifteen steps. In each round we get more clarity on our Os, Rs, Cs, and As.
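To make the four facets easier to picture, here is a purely illustrative sketch. OOUX is a design method, not a programming technique, so this Python snippet is only one way to visualize the shape of a single object-map entry; every name in it is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class ObjectDefinition:
    """One entry in an object map, organized by the four ORCA facets."""
    name: str                                               # O: the object itself
    relationships: list[str] = field(default_factory=list)  # R: other objects it connects to
    ctas: list[str] = field(default_factory=list)           # C: calls to action users take on it
    attributes: list[str] = field(default_factory=list)     # A: data the object carries

# A hypothetical "Patient" object from a medical app:
patient = ObjectDefinition(
    name="Patient",
    relationships=["Doctor", "Caregiver", "Appointment"],
    ctas=["view chart", "add note"],
    attributes=["name", "date of birth"],
)
```

Each iterative round of the process would add entries like this one or sharpen the four lists inside it.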
I sometimes say that ORCA is a “garbage in, garbage out” process. To ensure that the testable prototype produced in the final round actually tests well, the process needs to be fed by good research. But if you don’t have a ton of research, the beginning of the ORCA process serves another purpose: it helps you sell the need for research.
In other words, the ORCA process serves as a gauntlet between research and design. With good research, you can gracefully ride the killer whale from research into design. But without good research, the process effectively spits you back into research with a cache of specific open questions.
What gets us into trouble is not what we don’t know. It’s what we know for sure that just ain’t so.
Mark Twain
The first two steps of the ORCA process—Object Discovery and Relationship Discovery—shine a spotlight on the dark, dusty corners of your team’s misalignments and any inherent complexity that’s been swept under the rug. They begin to expose what that classic project comic so beautifully illustrates: everyone on the team is carrying around a different picture of what’s being built.
This is one reason why so many UX designers are frustrated in their job and why many projects fail. And this is also why we often can’t sell research: every decision-maker is confident in their own mental picture.
Once we expose hidden fuzzy patches in each picture and the differences between them all, the case for user research makes itself.
But how we do this is important. However much we might want to, we can’t just tell everyone, “YOU ARE WRONG!” Instead, we need to facilitate and guide our team members to self-identify holes in their picture. When stakeholders take ownership of assumptions and gaps in understanding, BAM! Suddenly, UX research is not such a hard sell, and everyone is aboard the same curiosity-boat.
Say your users are doctors. And you have no idea how doctors use the system you are tasked with redesigning.
You might try to sell research by honestly saying: “We need to understand doctors better! What are their pain points? How do they use the current app?” But here’s the problem with that. Those questions are vague, and the answers to them don’t feel acutely actionable.
Instead, you want your stakeholders themselves to ask super-specific questions. This is more like the kind of conversation you need to facilitate. Let’s listen in:
“Wait a sec, how often do doctors share patients? Does a patient in this system have primary and secondary doctors?”
“Can a patient even have more than one primary doctor?”
“Is it a ‘primary doctor’ or just a ‘primary caregiver’… Can’t that role be a nurse practitioner?”
“No, caregivers are something else… That’s the patient’s family contacts, right?”
“So are caregivers in scope for this redesign?”
“Yeah, because if a caregiver is present at an appointment, the doctor needs to note that. Like, tag the caregiver on the note… Or on the appointment?”
Now we are getting somewhere. Do you see how powerful it can be to get stakeholders to debate these questions themselves? The diabolical goal here is to shake their confidence—gently and diplomatically.
When these kinds of questions bubble up collaboratively and come directly from the mouths of your stakeholders and decision-makers, suddenly, designing screens without knowing the answers to these questions seems incredibly risky, even silly.
If we create software without understanding the real-world information environment of our users, we will likely create software that does not align to the real-world information environment of our users. And this will, hands down, result in a more confusing, more complex, and less intuitive software product.
But how do we get to these kinds of meaty questions diplomatically, efficiently, collaboratively, and reliably?
We can do this by starting with the two big questions that align to the first two steps of the ORCA process: What are the objects? And how do they relate?
In practice, getting to these answers is easier said than done. I’m going to show you how these two simple questions can provide the outline for an Object Definition Workshop. During this workshop, these “seed” questions will blossom into dozens of specific questions and shine a spotlight on the need for more user research.
In the next section, I’ll show you how to run an Object Definition Workshop with your stakeholders (and entire cross-functional team, hopefully). But first, you need to do some prep work.
Basically, look for nouns that are particular to the business or industry of your project, and do it across at least a few sources. I call this noun foraging.
Here are just a few great noun foraging sources:
Put your detective hat on, my dear Watson. Get resourceful and leverage what you have. If all you have is a marketing website, some screenshots of the existing legacy system, and access to customer service chat logs, then use those.
As you peruse these sources, watch for the nouns that are used over and over again, and start listing them (preferably on blue sticky notes if you’ll be creating an object map later!).
You’ll want to focus on nouns that might represent objects in your system. If you are having trouble determining if a noun might be object-worthy, remember the acronym SIP and test for Structure, Instance, and Purpose:
Think of a library app, for example. Is “book” an object?
Structure: can you think of a few attributes for this potential object? Title, author, publish date… Yep, it has structure. Check!
Instance: what are some examples of this potential “book” object? Can you name a few? The Alchemist, Ready Player One, Everybody Poops… OK, check!
Purpose: why is this object important to the users and business? Well, “book” is what our library client is providing to people and books are why people come to the library… Check, check, check!
As you are noun foraging, focus on capturing the nouns that have SIP. Avoid capturing components like dropdowns, checkboxes, and calendar pickers—your UX system is not your design system! Components are just the packaging for objects—they are a means to an end. No one is coming to your digital place to play with your dropdown! They are coming for the VALUABLE THINGS and what they can do with them. Those things, or objects, are what we are trying to identify.
Let’s say we work for a startup disrupting the email experience. This is how I’d start my noun foraging.
First I’d look at my own email client, which happens to be Gmail. I’d then look at Outlook and the new HEY email. I’d look at Yahoo, Hotmail… I’d even look at Slack and Basecamp and other so-called “email replacers.” I’d read some articles, reviews, and forum threads where people are complaining about email. While doing all this, I would look for and write down the nouns.
(Before moving on, feel free to go noun foraging for this hypothetical product, too, and then scroll down to see how much our lists match up. Just don’t get lost in your own emails! Come back to me!)
Drumroll, please…
Here are a few nouns I came up with during my noun foraging:
Scan your list of nouns and pick out words that you are completely clueless about. In our email example, it might be client or automation. Do as much homework as you can before your session with stakeholders: google what’s googleable. But other terms might be so specific to the product or domain that you need to have a conversation about them.
Aside: here are some real nouns foraged during my own past project work that I needed my stakeholders to help me understand:
This is really all you need to prepare for the workshop session: a list of nouns that represent potential objects and a short list of nouns that need to be defined further.
You could actually start your workshop with noun foraging—this activity can be done collaboratively. If you have five people in the room, pick five sources, assign one to every person, and give everyone ten minutes to find the objects within their source. When the time’s up, come together and find the overlap. Affinity mapping is your friend here!
If your team is short on time and might be reluctant to do this kind of grunt work (which is usually the case), do your own noun foraging beforehand, but be prepared to show your work. I love presenting screenshots of documents and screens with all the nouns already highlighted. Bring the artifacts of your process, and start the workshop with a five-minute overview of your noun foraging journey.
HOT TIP: before jumping into the workshop, frame the conversation as a requirements-gathering session to help you better understand the scope and details of the system. You don’t need to let them know that you’re looking for gaps in the team’s understanding so that you can prove the need for more user research—that will be our little secret. Instead, go into the session optimistically, as if your knowledgeable stakeholders and PMs and biz folks already have all the answers.
Then, let the question whack-a-mole commence.
Want to have some real fun? At the beginning of your session, ask stakeholders to privately write definitions for the handful of obscure nouns you might be uncertain about. Then, have everyone show their cards at the same time and see if you get different definitions (you will). This is gold for exposing misalignment and starting great conversations.
As your discussion unfolds, capture any agreed-upon definitions. And when uncertainty emerges, quietly (but visibly) start an “open questions” parking lot. 😉
After definitions solidify, here’s a great follow-up: ask the group what users actually call these things. For example:
Stakeholder 1: They probably call email clients “apps.” But I’m not sure.
Stakeholder 2: Automations are often called “workflows,” I think. Or, maybe users think workflows are something different.
If a more user-friendly term emerges, ask the group if they can agree to use only that term moving forward. This way, the team can better align to the users’ language and mindset.
OK, moving on.
If you have two or more objects that seem to overlap in purpose, ask one of these questions:
You: Is a saved response the same as a template?
Stakeholder 1: Yes! Definitely.
Stakeholder 2: I don’t think so… A saved response is text with links and variables, but a template is more about the look and feel, like default fonts, colors, and placeholder images.
Continue to build out your growing glossary of objects. And continue to capture areas of uncertainty in your “open questions” parking lot.
If you successfully determine that two similar things are, in fact, different, here’s your next follow-up question:
You: Are saved responses and templates related in any way?
Stakeholder 3: Yeah, a template can be applied to a saved response.
You, always with the follow-ups: When is the template applied to a saved response? Does that happen when the user is constructing the saved response? Or when they apply the saved response to an email? How does that actually work?
Listen. Capture uncertainty. Once the list of “open questions” grows to a critical mass, pause to start assigning questions to groups or individuals. Some questions might be for the dev team (hopefully at least one developer is in the room with you). One question might be specifically for someone who couldn’t make it to the workshop. And many questions will need to be labeled “user.”
Do you see how we are building up to our UXR sales pitch?
Your next question narrows the team’s focus toward what’s most important to your users. You can simply ask, “Are saved responses in scope for our first release?” but I’ve got a better, more devious strategy.
By now, you should have a list of clearly defined objects. Ask participants to sort these objects from most to least important, either in small breakout groups or individually. Then, like you did with the definitions, have everyone reveal their sort order at once. Surprisingly—or not so surprisingly—it’s not unusual for the VP to rank something like “saved responses” as #2 while everyone else puts it at the bottom of the list. Try not to look too smug as you inevitably expose more misalignment.
I did this for a startup a few years ago. We posted the three groups’ wildly different sort orders on the whiteboard.
The CEO stood back, looked at it, and said, “This is why we haven’t been able to move forward in two years.”
Admittedly, it’s tragic to hear that, but as a professional, it feels pretty awesome to be the one who facilitated a watershed realization.
Once you have a good idea of in-scope, clearly defined things, this is when you move on to doing more relationship mapping.
We’ve already done a bit of this while trying to determine if two things are different, but this time, ask the team about every potential relationship. For each object, ask how it relates to all the other objects. In what ways are the objects connected? To visualize all the connections, pull out your trusty boxes-and-arrows technique. Here, we are connecting our objects with verbs. I like to keep my verbs to simple “has a” and “has many” statements.
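To make the output of this mapping concrete, here’s a minimal sketch of how the resulting object map could be captured as data. The objects and attributes below are hypothetical picks from our email example, not the “official” output of any workshop; the point is that each object carries its attributes plus its “has a” / “has many” links:

```javascript
// Hypothetical object map from the email example: each object lists
// its attributes plus "has a" / "has many" links to other objects.
const objectMap = {
  email: {
    attributes: ['subject', 'body', 'date'],
    hasA: ['template'],
    hasMany: ['recipients', 'attachments'],
  },
  template: {
    attributes: ['name', 'fonts', 'colors'],
    hasA: [],
    hasMany: [],
  },
};

// Flatten the map into plain "boxes-and-arrows" statements.
function connections(map) {
  const out = [];
  for (const [name, obj] of Object.entries(map)) {
    obj.hasA.forEach((target) => out.push(`${name} has a ${target}`));
    obj.hasMany.forEach((target) => out.push(`${name} has many ${target}`));
  }
  return out;
}

console.log(connections(objectMap));
// → ["email has a template", "email has many recipients", "email has many attachments"]
```

Writing the relationships out this way makes it obvious when a connection is missing or ambiguous—exactly the kind of gap you want surfacing in the room.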
This system modeling activity brings up all sorts of new questions:
Solid answers might emerge directly from the workshop participants. Great! Capture that new shared understanding. But when uncertainty surfaces, continue to add questions to your growing parking lot.
You’ve positioned the explosives all along the floodgates. Now you simply have to light the fuse and BOOM. Watch the buy-in for user research flooooow.
Before your workshop wraps up, have the group reflect on the list of open questions. Make plans for getting answers internally, then focus on the questions that need to be brought before users.
Here’s your final step. Take those questions you’ve compiled for user research and discuss the level of risk associated with NOT answering them. Ask, “If we design without an answer to this question, if we make up our own answer and we are wrong, how bad might that turn out?”
With this methodology, we are cornering our decision-makers into advocating for user research as they themselves label questions as high-risk. Sorry, not sorry.
Now is your moment of truth. With everyone in the room, ask for a reasonable budget of time and money to conduct 6–8 user interviews focused specifically on these questions.
HOT TIP: if you are new to UX research, please note that you’ll likely need to rephrase the questions that came up during the workshop before you present them to users. Make sure your questions are open-ended and don’t lead the user into any default answers.
Seriously, if at all possible, do not ever design screens again without first answering these fundamental questions: what are the objects and how do they relate?
I promise you this: if you can secure a shared understanding between the business, design, and development teams before you start designing screens, you will have less heartache and save more time and money, and (it almost feels like a bonus at this point!) users will be more receptive to what you put out into the world.
I sincerely hope this helps you win time and budget to go talk to your users and gain clarity on what you are designing before you start building screens. If you find success using noun foraging and the Object Definition Workshop, there’s more where that came from in the rest of the ORCA process, which will help prevent even more late-in-the-game scope tugs-of-war and strategy pivots.
All the best of luck! Now go sell research!
CSS is about styling boxes. In fact, the whole web is made of boxes, from the browser viewport to elements on a page. But every once in a while a new feature comes along that makes us rethink our design approach.
Round displays, for example, make it fun to play with circular clip areas. Mobile screen notches and virtual keyboards offer challenges to best organize content that stays clear of them. And dual screen or foldable devices make us rethink how to best use available space in a number of different device postures.
These recent evolutions of the web platform made it both more challenging and more interesting to design products. They’re great opportunities for us to break out of our rectangular boxes.
I’d like to talk about a new feature similar to the above: the Window Controls Overlay for Progressive Web Apps (PWAs).
Progressive Web Apps are blurring the lines between apps and websites. They combine the best of both worlds. On one hand, they’re stable, linkable, searchable, and responsive just like websites. On the other hand, they provide additional powerful capabilities, work offline, and read files just like native apps.
As a design surface, PWAs are really interesting because they challenge us to think about what mixing web and device-native user interfaces can be. On desktop devices in particular, we have more than 40 years of history telling us what applications should look like, and it can be hard to break out of this mental model.
At the end of the day though, PWAs on desktop are constrained to the window they appear in: a rectangle with a title bar at the top.
Here’s what a typical desktop PWA app looks like:
Sure, as the author of a PWA, you get to choose the color of the title bar (using the Web Application Manifest theme_color property), but that’s about it.
What if we could think outside this box, and reclaim the real estate of the app’s entire window? Doing so would give us a chance to make our apps more beautiful and feel more integrated in the operating system.
This is exactly what the Window Controls Overlay offers. This new PWA functionality makes it possible to take advantage of the full surface area of the app, including where the title bar normally appears.
Let’s start with an explanation of what the title bar and window controls are.
The title bar is the area displayed at the top of an app window, which usually contains the app’s name. Window controls are the affordances, or buttons, that make it possible to minimize, maximize, or close the app’s window, and are also displayed at the top.
Window Controls Overlay removes the physical constraint of the title bar and window controls areas. It frees up the full height of the app window, enabling the title bar and window control buttons to be overlaid on top of the application’s web content.
If you are reading this article on a desktop computer, take a quick look at other apps. Chances are they’re already doing something similar to this. In fact, the very web browser you are using to read this uses the top area to display tabs.
Spotify displays album artwork all the way to the top edge of the application window.
Microsoft Word uses the available title bar space to display the auto-save and search functionalities, and more.
The whole point of this feature is to allow you to make use of this space with your own content while providing a way to account for the window control buttons. And it enables you to offer this modified experience on a range of platforms while not adversely affecting the experience on browsers or devices that don’t support Window Controls Overlay. After all, PWAs are all about progressive enhancement, so this feature is a chance to enhance your app to use this extra space when it’s available.
For the rest of this article, we’ll be working on a demo app to learn more about using the feature.
The demo app is called 1DIV. It’s a simple CSS playground where users can create designs using CSS and a single HTML element.
The app has two pages. The first lists the existing CSS designs you’ve created:
The second page enables you to create and edit CSS designs:
Since I’ve added a simple web manifest and service worker, we can install the app as a PWA on desktop. Here is what it looks like on macOS:
And on Windows:
Our app is looking good, but the white title bar in the first page is wasted space. In the second page, it would be really nice if the design area went all the way to the top of the app window.
Let’s use the Window Controls Overlay feature to improve this.
The feature is still experimental at the moment. To try it, you need to enable it in one of the supported browsers.
As of now, it has been implemented in Chromium, as a collaboration between Microsoft and Google. We can therefore use it in Chrome or Edge by going to the internal about://flags page, and enabling the Desktop PWA Window Controls Overlay flag.
To use the feature, we need to add the following display_override member to our web app’s manifest file:
{
"name": "1DIV",
"description": "1DIV is a mini CSS playground",
"lang": "en-US",
"start_url": "/",
"theme_color": "#ffffff",
"background_color": "#ffffff",
"display_override": [
"window-controls-overlay"
],
"icons": [
...
]
}
On the surface, the feature is really simple to use. This manifest change is the only thing we need to make the title bar disappear and turn the window controls into an overlay.
However, to provide a great experience for all users regardless of what device or browser they use, and to make the most of the title bar area in our design, we’ll need a bit of CSS and JavaScript code.
Here is what the app looks like now:
The title bar is gone, which is what we wanted, but our logo, search field, and NEW button are partially covered by the window controls because now our layout starts at the top of the window.
It’s similar on Windows, with the difference that the close, maximize, and minimize buttons appear on the right side, grouped together with the PWA control buttons:
Along with the feature, new CSS environment variables have been introduced:
- titlebar-area-x
- titlebar-area-y
- titlebar-area-width
- titlebar-area-height

You use these variables with the CSS env() function to position your content where the title bar would have been while ensuring it won’t overlap with the window controls. In our case, we’ll use two of the variables to position our header, which contains the logo, search bar, and NEW button.
header {
position: absolute;
left: env(titlebar-area-x, 0);
width: env(titlebar-area-width, 100%);
height: var(--toolbar-height);
}
The titlebar-area-x variable gives us the distance from the left of the viewport to where the title bar would appear, and titlebar-area-width is its width. (Remember, this is not equivalent to the width of the entire viewport, just the title bar portion, which as noted earlier, doesn’t include the window controls.)
By doing this, we make sure our content remains fully visible. We’re also defining fallback values (the second parameter in the env() function) for when the variables are not defined (such as on non-supporting browsers, or when the Window Controls Overlay feature is disabled).
Now our header adapts to its surroundings, and it doesn’t feel like the window control buttons have been added as an afterthought. The app looks a lot more like a native app.
Now let’s take a closer look at our second page: the CSS playground editor.
Not great. Our CSS demo area does go all the way to the top, which is what we wanted, but the way the window controls appear as white rectangles on top of it is quite jarring.
We can fix this by changing the app’s theme color. There are a couple of ways to define it: the theme_color property in the web app manifest, or a theme-color meta tag in the HTML page.

In our case, we can set the manifest theme_color to white to provide the right default color for our app. The OS will read this color value when the app is installed and use it to make the window controls background color white. This color works great for our main page with the list of demos.
The theme-color meta tag can be changed at runtime, using JavaScript. So we can do that to override the white with the right demo background color when one is opened.
Here is the function we’ll use:
function themeWindow(bgColor) {
document.querySelector("meta[name=theme-color]").setAttribute('content', bgColor);
}
With this in place, we can imagine how using color and CSS transitions can produce a smooth change from the list page to the demo page, and enable the window control buttons to blend in with the rest of the app’s interface.
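As a minimal sketch of that idea, here’s one way to decide which color to hand to the themeWindow() function defined above. The function name, the page shape, and the '#ffffff' default are assumptions for illustration, not code from the demo app’s actual source:

```javascript
// Sketch: pick the color the window controls should use.
// The list page keeps the manifest default (white); an open demo
// reuses its own background color so the controls blend in.
function pickThemeColor(page) {
  if (page.type === 'demo' && page.backgroundColor) {
    return page.backgroundColor;
  }
  return '#ffffff'; // matches theme_color in the manifest
}

// In the app, on navigation, something like:
//   themeWindow(pickThemeColor(currentPage));
// with a CSS transition on the affected elements smoothing the change.
```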
Now, getting rid of the title bar entirely does have an important accessibility consequence: it’s much more difficult to move the application window around.
The title bar provides a sizable area for users to click and drag, but by using the Window Controls Overlay feature, this area becomes limited to where the control buttons are, and users have to very precisely aim between these buttons to move the window.
Fortunately, this can be fixed using CSS with the app-region property. This property is, for now, only supported in Chromium-based browsers and needs the -webkit- vendor prefix.
To make any element of the app become a dragging target for the window, we can use the following:
-webkit-app-region: drag;
It is also possible to explicitly make an element non-draggable:
-webkit-app-region: no-drag;
These options can be useful for us. We can make the entire header a dragging target, but make the search field and NEW button within it non-draggable so they can still be used as normal.
However, because the editor page doesn’t display the header, users wouldn’t be able to drag the window while editing code. So let’s use a different approach. We’ll create another element before our header, also absolutely positioned, and dedicated to dragging the window.
...
.drag {
position: absolute;
top: 0;
width: 100%;
height: env(titlebar-area-height, 0);
-webkit-app-region: drag;
}
With the above code, we’re making the draggable area span the entire viewport width, and using the titlebar-area-height variable to make it as tall as what the title bar would have been. This way, our draggable area is aligned with the window control buttons as shown below.
And, now, to make sure our search field and button remain usable:
header .search,
header .new {
-webkit-app-region: no-drag;
}
With the above code, users can click and drag where the title bar used to be. It is an area that users expect to be able to use to move windows on desktop, and we’re not breaking this expectation, which is good.
It may be useful for an app to know both whether the window controls overlay is visible and when its size changes. In our case, if the user made the window very narrow, there wouldn’t be enough space for the search field, logo, and button to fit, so we’d want to push them down a bit.
The Window Controls Overlay feature comes with a JavaScript API we can use to do this: navigator.windowControlsOverlay.
The API provides three interesting things:
- navigator.windowControlsOverlay.visible lets us know whether the overlay is visible.
- navigator.windowControlsOverlay.getBoundingClientRect() lets us know the position and size of the title bar area.
- navigator.windowControlsOverlay.ongeometrychange lets us know when the size or visibility changes.

Let’s use this to be aware of the size of the title bar area and move the header down if it’s too narrow.
if (navigator.windowControlsOverlay) {
navigator.windowControlsOverlay.addEventListener('geometrychange', () => {
const { width } = navigator.windowControlsOverlay.getBoundingClientRect();
document.body.classList.toggle('narrow', width < 250);
});
}
In the example above, we set the narrow class on the body of the app if the title bar area is narrower than 250px. We could do something similar with a media query, but using the windowControlsOverlay API has two advantages for our use case:
.narrow header {
top: env(titlebar-area-height, 0);
left: 0;
width: 100%;
}
Using the above CSS code, we can move our header down to stay clear of the window control buttons when the window is too narrow, and move the thumbnails down accordingly.
Using the Window Controls Overlay feature, we were able to take our simple demo app and turn it into something that feels so much more integrated on desktop devices. Something that reaches out of the usual window constraints and provides a custom experience for its users.
In reality, this feature only gives us about 30 pixels of extra room and comes with challenges on how to deal with the window controls. And yet, this extra room and those challenges can be turned into exciting design opportunities.
More devices of all shapes and forms get invented all the time, and the web keeps on evolving to adapt to them. New features get added to the web platform to allow us, web authors, to integrate more and more deeply with those devices. From watches or foldable devices to desktop computers, we need to evolve our design approach for the web. Building for the web now lets us think outside the rectangular box.
So let’s embrace this. Let’s use the standard technologies already at our disposal, and experiment with new ideas to provide tailored experiences for all devices, all from a single codebase!
If you get a chance to try the Window Controls Overlay feature and have feedback about it, you can open issues on the spec’s repository. It’s still early in the development of this feature, and you can help make it even better. Or, you can take a look at the feature’s existing documentation, or this demo app and its source code.
About two and a half years ago, I introduced the idea of daily ethical design. It was born out of my frustration with the many obstacles to achieving design that’s usable and equitable; protects people’s privacy, agency, and focus; benefits society; and restores nature. I argued that we need to overcome the inconveniences that prevent us from acting ethically and that we need to elevate design ethics to a more practical level by structurally integrating it into our daily work, processes, and tools.
Unfortunately, we’re still very far from this ideal.
At the time, I didn’t know yet how to structurally integrate ethics. Yes, I had found some tools that had worked for me in previous projects, such as using checklists, assumption tracking, and “dark reality” sessions, but I didn’t manage to apply those in every project. I was still struggling for time and support, and at best I had only partially achieved a higher (moral) quality of design—which is far from my definition of structurally integrated.
I decided to dig deeper for the root causes in business that prevent us from practicing daily ethical design. Now, after much research and experimentation, I believe that I’ve found the key that will let us structurally integrate ethics. And it’s surprisingly simple! But first we need to zoom out to get a better understanding of what we’re up against.
Sadly, we’re trapped in a capitalistic system that reinforces consumerism and inequality, and it’s obsessed with the fantasy of endless growth. Sea levels, temperatures, and our demand for energy continue to rise unchallenged, while the gap between rich and poor continues to widen. Shareholders expect ever-higher returns on their investments, and companies feel forced to set short-term objectives that reflect this. Over the last decades, those objectives have twisted our well-intended human-centered mindset into a powerful machine that promotes ever-higher levels of consumption. When we’re working for an organization that pursues “double-digit growth” or “aggressive sales targets” (which is 99 percent of us), that’s very hard to resist while remaining human friendly. Even with our best intentions, and even though we like to say that we create solutions for people, we’re a part of the problem.
What can we do to change this?
We can start by acting on the right level of the system. Donella H. Meadows, a system thinker, once listed ways to influence a system in order of effectiveness. When you apply these to design, you get:
The takeaway? If we truly want to incorporate ethics into our daily design practice, we must first change the measurable objectives of the company we work for, from the bottom up.
Traditionally, we consider a product or service successful if it’s desirable to humans, technologically feasible, and financially viable. You tend to see these represented as equals; if you type the three words in a search engine, you’ll find diagrams of three equally sized, evenly arranged circles.
But in our hearts, we all know that the three dimensions aren’t equally weighted: it’s viability that ultimately controls whether a product will go live. So a more realistic representation might look like this:
Desirability and feasibility are the means; viability is the goal. Companies—outside of nonprofits and charities—exist to make money.
A genuinely purpose-driven company would try to reverse this dynamic: it would recognize finance for what it was intended to be, a means. Both feasibility and viability then become means to achieve what the company set out to do. It makes intuitive sense: to achieve almost anything, you need resources, people, and money. (Fun fact: the Italian language makes no distinction between feasibility and viability; both are simply fattibilità.)
But simply swapping viable for desirable isn’t enough to achieve an ethical outcome. Desirability is still linked to consumerism because the associated activities aim to identify what people want—whether it’s good for them or not. Desirability objectives, such as user satisfaction or conversion, don’t consider whether a product is healthy for people. They don’t prevent us from creating products that distract or manipulate people or stop us from contributing to society’s wealth inequality. They’re unsuitable for establishing a healthy balance with nature.
There’s a fourth dimension of success that’s missing: our designs also need to be ethical in the effect that they have on the world.
This is hardly a new idea. Many similar models exist, some calling the fourth dimension accountability, integrity, or responsibility. What I’ve never seen before, however, is the necessary step that comes after: to influence the system as designers and to make ethical design more practical, we must create objectives for ethical design that are achievable and inspirational. There’s no one way to do this because it highly depends on your culture, values, and industry. But I’ll give you the version that I developed with a group of colleagues at a design agency. Consider it a template to get started.
We created objectives that address design’s effect on three levels: individual, societal, and global.
An objective on the individual level tells us what success is beyond the typical focus of usability and satisfaction—instead considering matters such as how much time and attention is required from users. We pursued well-being:
We create products and services that allow for people’s health and happiness. Our solutions are calm, transparent, nonaddictive, and nonmisleading. We respect our users’ time, attention, and privacy, and help them make healthy and respectful choices.
An objective on the societal level forces us to consider our impact beyond just the user, widening our attention to the economy, communities, and other indirect stakeholders. We called this objective equity:
We create products and services that have a positive social impact. We consider economic equality, racial justice, and the inclusivity and diversity of people as teams, users, and customer segments. We listen to local culture, communities, and those we affect.
Finally, the objective on the global level aims to ensure that we remain in balance with the only home we have as humanity. Referring to it simply as sustainability, our definition was:
We create products and services that reward sufficiency and reusability. Our solutions support the circular economy: we create value from waste, repurpose products, and prioritize sustainable choices. We deliver functionality instead of ownership, and we limit energy use.
In short, ethical design (to us) meant achieving well-being for each user and an equitable value distribution within society through a design that can be sustained by our living planet. When we introduced these objectives in the company, design ethics and responsible design suddenly became tangible for many colleagues, achievable through practical and even familiar actions.
But defining these objectives still isn’t enough. What truly caught the attention of senior management was the fact that we created a way to measure every design project’s well-being, equity, and sustainability.
The accompanying overview lists example metrics that you can use as you pursue well-being, equity, and sustainability.
There’s a lot of power in measurement. As the saying goes, what gets measured gets done. Donella Meadows once shared this example:
“If the desired system state is national security, and that is defined as the amount of money spent on the military, the system will produce military spending. It may or may not produce national security.”
This phenomenon explains why desirability is a poor indicator of success: it’s typically defined as the increase in customer satisfaction, session length, frequency of use, conversion rate, churn rate, download rate, and so on. But none of these metrics increase the health of people, communities, or ecosystems. What if instead we measured success through metrics for (digital) well-being, such as (reduced) screen time or software energy consumption?
There’s another important message here. Even if we set an objective to build a calm interface, if we were to choose the wrong metric for calmness—say, the number of interface elements—we could still end up with a screen that induces anxiety. Choosing the wrong metric can completely undo good intentions.
Additionally, choosing the right metric is enormously helpful in focusing the design team. Once you go through the exercise of choosing metrics for your objectives, you’re forced to consider what success looks like concretely and how you can prove that you’ve reached your ethical objectives. It also forces you to consider what you, as a designer, have control over: what can you include in your design or change in your process that will lead to the right type of success? The answer to this question brings a lot of clarity and focus.
And finally, it’s good to remember that traditional businesses run on measurements, and managers love to spend time discussing charts (ideally hockey-stick shaped), especially if they concern profit, the one-above-all of metrics. For good or ill, if we want to improve the system and have a serious discussion about ethical design with managers, we’ll need to speak that business language.
Once you’ve defined your objectives and you have a reasonable idea of the potential metrics for your design project, only then do you have a chance to structurally practice ethical design. It “simply” becomes a matter of using your creativity and choosing from all the knowledge and toolkits already available to you.
I think this is quite exciting! It opens a whole new set of challenges and considerations for the design process. Should you go with that energy-consuming video, or would a simple illustration be enough? Which typeface is the most calm and inclusive? Which new tools and methods do you use? When is the website’s end of life? How can you provide the same service while requiring less attention from users? How do you make sure that those who are affected by decisions are there when those decisions are made? How can you measure your effects?
The redefinition of success will completely change what it means to do good design.
There is, however, a final piece of the puzzle that’s missing: convincing your client, product owner, or manager to be mindful of well-being, equity, and sustainability. For this, it’s essential to engage stakeholders in a dedicated kickoff session.
The kickoff is the most important meeting of a project, and also the easiest to forget to hold. It consists of two major phases: 1) the alignment of expectations, and 2) the definition of success.
In the first phase, the entire (design) team goes over the project brief and meets with all the relevant stakeholders. Everyone gets to know one another and express their expectations on the outcome and their contributions to achieving it. Assumptions are raised and discussed. The aim is to get on the same level of understanding and to in turn avoid preventable miscommunications and surprises later in the project.
For example, for a recent freelance project that aimed to design a digital platform that facilitates US student advisors’ documentation and communication, we conducted an online kickoff with the client, a subject-matter expert, and two other designers. We used a combination of canvases on Miro: one with questions from “Manual of Me” (to get to know each other), a Team Canvas (to express expectations), and a version of the Project Canvas to align on scope, timeline, and other practical matters.
The above is the traditional purpose of a kickoff. But just as important as expressing expectations is agreeing on what success means for the project—in terms of desirability, viability, feasibility, and ethics. What are the objectives in each dimension?
Agreement on what success means at such an early stage is crucial because you can rely on it for the remainder of the project. If, for example, the design team wants to build an inclusive app for a diverse user group, they can raise diversity as a specific success criterion during the kickoff. If the client agrees, the team can refer back to that promise throughout the project. “As we agreed in our first meeting, having a diverse user group that includes A and B is necessary to build a successful product. So we do activity X and follow research process Y.” Compare those odds to a situation in which the team didn’t agree to that beforehand and had to ask for permission halfway through the project. The client might argue that that came on top of the agreed scope—and she’d be right.
In the case of this freelance project, to define success I prepared a round canvas that I call the Wheel of Success. It consists of an inner ring, meant to capture ideas for objectives, and a set of outer rings, meant to capture ideas on how to measure those objectives. The rings are divided into six dimensions of successful design: healthy, equitable, sustainable, desirable, feasible, and viable.
We went through each dimension, writing down ideas on digital sticky notes. Then we discussed our ideas and verbally agreed on the most important ones. For example, our client agreed that sustainability and progressive enhancement are important success criteria for the platform. And the subject-matter expert emphasized the importance of including students from low-income and disadvantaged groups in the design process.
After the kickoff, we summarized our ideas and shared understanding in a project brief that captured these aspects:
With such a brief in place, you can use the agreed-upon objectives and concrete metrics as a checklist of success, and your design team will be ready to pursue the right objective—using the tools, methods, and metrics at their disposal to achieve ethical outcomes.
Over the past year, quite a few colleagues have asked me, “Where do I start with ethical design?” My answer has always been the same: organize a session with your stakeholders to (re)define success. Even though you might not always be 100 percent successful in agreeing on goals that cover all responsibility objectives, that beats the alternative (the status quo) every time. If you want to be an ethical, responsible designer, there’s no skipping this step.
To be even more specific: if you consider yourself a strategic designer, your challenge is to define ethical objectives, set the right metrics, and conduct those kickoff sessions. If you consider yourself a system designer, your starting point is to understand how your industry contributes to consumerism and inequality, understand how finance drives business, and brainstorm which levers are available to influence the system at the highest level. Then redefine success to create the space to exercise those levers.
And for those who consider themselves service designers or UX designers or UI designers: if you truly want to have a positive, meaningful impact, stay away from the toolkits and meetups and conferences for a while. Instead, gather your colleagues and define goals for well-being, equity, and sustainability through design. Engage your stakeholders in a workshop and challenge them to think of ways to achieve and measure those ethical goals. Take their input, make it concrete and visible, ask for their agreement, and hold them to it.
Otherwise, I’m genuinely sorry to say, you’re wasting your precious time and creative energy.
Of course, engaging your stakeholders in this way can be uncomfortable. Many of my colleagues expressed doubts such as “What will the client think of this?,” “Will they take me seriously?,” and “Can’t we just do it within the design team instead?” In fact, a product manager once asked me why ethics couldn’t just be a structured part of the design process—to just do it without spending the effort to define ethical objectives. It’s a tempting idea, right? We wouldn’t have to have difficult discussions with stakeholders about what values or which key-performance indicators to pursue. It would let us focus on what we like and do best: designing.
But as systems theory tells us, that’s not enough. For those of us who aren’t from marginalized groups and have the privilege to speak up and be heard, that uncomfortable space is exactly where we need to be if we truly want to make a difference. We can’t remain within the design-for-designers bubble, enjoying our privileged working-from-home situation, disconnected from the real world out there. If we keep talking solely about ethical design and it remains at the level of articles and toolkits, we’re not designing ethically; it’s just theory. We need to actively engage our colleagues and clients by challenging them to redefine success in business.
With a bit of courage, determination, and focus, we can break out of this cage that finance and business-as-usual have built around us and become facilitators of a new type of business that can see beyond financial value. We just need to agree on the right objectives at the start of each design project, find the right metrics, and realize that we already have everything that we need to get started. That’s what it means to do daily ethical design.
For their inspiration and support over the years, I would like to thank Emanuela Cozzi Schettini, José Gallegos, Annegret Bönemann, Ian Dorr, Vera Rademaker, Virginia Rispoli, Cecilia Scolaro, Rouzbeh Amini, and many others.
The mobile-first design methodology is great—it focuses on what really matters to the user, it’s well-practiced, and it’s been a common design pattern for years. So developing your CSS mobile-first should also be great, too…right?
Well, not necessarily. Classic mobile-first CSS development is based on the principle of overwriting style declarations: you begin your CSS with default style declarations, and overwrite and/or add new styles as you add breakpoints with min-width media queries for larger viewports (for a good overview see “What is Mobile First CSS and Why Does It Rock?”). But all those exceptions create complexity and inefficiency, which in turn can lead to an increased testing effort and a code base that’s harder to maintain. Admit it—how many of us willingly want that?
On your own projects, mobile-first CSS may yet be the best tool for the job, but first you need to evaluate just how appropriate it is in light of the visual design and user interactions you’re working on. To help you get started, here’s how I go about tackling the factors you need to watch for, and I’ll discuss some alternate solutions if mobile-first doesn’t seem to suit your project.
Some of the things to like about mobile-first CSS development—and why it’s been the de facto development methodology for so long—make a lot of sense:
Development hierarchy. One thing you undoubtedly get from mobile-first is a nice development hierarchy—you just focus on the mobile view and get developing.
Tried and tested. It’s a tried and tested methodology that’s worked for years for a reason: it solves a problem really well.
Prioritizes the mobile view. The mobile view is the simplest and arguably the most important, as it encompasses all the key user journeys, and often accounts for a higher proportion of user visits (depending on the project).
Prevents desktop-centric development. As development is done using desktop computers, it can be tempting to initially focus on the desktop view. But thinking about mobile from the start prevents us from getting stuck later on; no one wants to spend their time retrofitting a desktop-centric site to work on mobile devices!
Setting style declarations and then overwriting them at higher breakpoints can lead to undesirable ramifications:
More complexity. The farther up the breakpoint hierarchy you go, the more unnecessary code you inherit from lower breakpoints.
Higher CSS specificity. Styles that have been reverted to their browser default value in a class name declaration now have a higher specificity. This can be a headache on large projects when you want to keep the CSS selectors as simple as possible.
Requires more regression testing. Changes to the CSS at a lower view (like adding a new style) requires all higher breakpoints to be regression tested.
The browser can’t prioritize CSS downloads. At wider breakpoints, classic mobile-first min-width media queries don’t leverage the browser’s capability to download CSS files in priority order.
There is nothing inherently wrong with overwriting values; CSS was designed to do just that. Still, inheriting incorrect values is unhelpful and can be burdensome and inefficient. It can also lead to increased style specificity when you have to overwrite styles to reset them back to their defaults, something that may cause issues later on, especially if you are using a combination of bespoke CSS and utility classes. We won’t be able to use a utility class for a style that has been reset with a higher specificity.
With this in mind, I’m developing CSS with a focus on the default values much more these days. Since there’s no specific order, and no chains of specific values to keep track of, this frees me to develop breakpoints simultaneously. I concentrate on finding common styles and isolating the specific exceptions in closed media query ranges (that is, any range with a max-width set).
This approach opens up some opportunities, as you can look at each breakpoint as a clean slate. If a component’s layout looks like it should be based on Flexbox at all breakpoints, it’s fine and can be coded in the default style sheet. But if it looks like Grid would be much better for large screens and Flexbox for mobile, these can both be done entirely independently when the CSS is put into closed media query ranges. Also, developing simultaneously requires you to have a good understanding of any given component in all breakpoints up front. This can help surface issues in the design earlier in the development process. We don’t want to get stuck down a rabbit hole building a complex component for mobile, and then get the designs for desktop and find they are equally complex and incompatible with the HTML we created for the mobile view!
Though this approach isn’t going to suit everyone, I encourage you to give it a try. There are plenty of tools out there to help with concurrent development, such as Responsively App, Blisk, and many others.
Having said that, I don’t feel the order itself is particularly relevant. If you are comfortable with focusing on the mobile view, have a good understanding of the requirements for other breakpoints, and prefer to work on one device at a time, then by all means stick with the classic development order. The important thing is to identify common styles and exceptions so you can put them in the relevant stylesheet—a sort of manual tree-shaking process! Personally, I find this a little easier when working on a component across breakpoints, but that’s by no means a requirement.
In classic mobile-first CSS we overwrite the styles, but we can avoid this by using media query ranges. To illustrate the difference (I’m using SCSS for brevity), let’s assume there are three visual designs:

smaller than 768px (mobile)
from 768px up to 1023.98px (tablet)
1024px and anything larger (desktop)
Take a simple example where a block-level element has a default padding of “20px,” which is overwritten at tablet to be “40px” and set back to “20px” on desktop.
Classic:

.my-block {
  padding: 20px;
  @media (min-width: 768px) {
    padding: 40px;
  }
  @media (min-width: 1024px) {
    padding: 20px;
  }
}

Closed media query range:

.my-block {
  padding: 20px;
  @media (min-width: 768px) and (max-width: 1023.98px) {
    padding: 40px;
  }
}
The subtle difference is that the mobile-first example sets the default padding to “20px” and then overwrites it at each breakpoint, setting it three times in total. In contrast, the second example sets the default padding to “20px” and only overrides it at the relevant breakpoint where it isn’t the default value (in this instance, tablet is the exception).
The goal is to:
To this end, closed media query ranges are our best friend. If we need to make a change to any given view, we make it in the CSS media query range that applies to the specific breakpoint. We’ll be much less likely to introduce unwanted alterations, and our regression testing only needs to focus on the breakpoint we have actually edited.
Taking the above example, if we find that .my-block spacing on desktop is already accounted for by the margin at that breakpoint, and since we want to remove the padding altogether, we could do this by setting the mobile padding in a closed media query range.
.my-block {
  @media (max-width: 767.98px) {
    padding: 20px;
  }
  @media (min-width: 768px) and (max-width: 1023.98px) {
    padding: 40px;
  }
}
The browser default padding for our block is “0,” so instead of adding a desktop media query and using unset or “0” for the padding value (which we would need with mobile-first), we can wrap the mobile padding in a closed media query (since it is now also an exception) so it won’t get picked up at wider breakpoints. At the desktop breakpoint, we won’t need to set any padding style, as we want the browser default value.
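For comparison, here’s a sketch of what the same change might look like in classic mobile-first form, assuming the same 768px and 1024px breakpoints as the examples above. Because every min-width query inherits the mobile padding, the desktop breakpoint is forced to reset it explicitly:

```scss
.my-block {
  // Mobile default; inherited by every wider breakpoint
  padding: 20px;

  @media (min-width: 768px) {
    // Tablet exception
    padding: 40px;
  }

  @media (min-width: 1024px) {
    // Desktop: must overwrite the inherited value
    // back to the browser default
    padding: 0;
  }
}
```

That extra desktop rule is exactly the kind of reset-to-default override that raises specificity and widens the regression-testing surface.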
Back in the day, keeping the number of requests to a minimum was very important due to the browser’s limit of concurrent requests (typically around six). As a consequence, the use of image sprites and CSS bundling was the norm, with all the CSS being downloaded in one go, as one stylesheet with highest priority.
With HTTP/2 and HTTP/3 now on the scene, the number of requests is no longer the big deal it used to be. This allows us to separate the CSS into multiple files by media query. The clear benefit of this is the browser can now request the CSS it currently needs with a higher priority than the CSS it doesn’t. This is more performant and can reduce the overall time page rendering is blocked.
To determine which version of HTTP you’re using, go to your website and open your browser’s dev tools. Select the Network tab, reload the page, and make sure the Protocol column is visible (if it isn’t, right-click any column header, such as Name, and check Protocol). If “h2” is listed under Protocol, it means HTTP/2 is being used.
Also, if your site is still using HTTP/1...WHY?!! What are you waiting for? There is excellent user support for HTTP/2.
Separating the CSS into individual files is a worthwhile task. Linking the separate CSS files using the relevant media attribute allows the browser to identify which files are needed immediately (because they’re render-blocking) and which can be deferred. Based on this, it allocates each file an appropriate priority.
In the following example of a website visited on a mobile breakpoint, we can see the mobile and default CSS are loaded with “Highest” priority, as they are currently needed to render the page. The remaining CSS files (print, tablet, and desktop) are still downloaded in case they’ll be needed later, but with “Lowest” priority.
With bundled CSS, the browser will have to download the CSS file and parse it before rendering can start.
In contrast, with the CSS separated into different files, each linked with the relevant media attribute, the browser can prioritize the files it currently needs. Using closed media query ranges allows the browser to do this at all widths, as opposed to classic mobile-first min-width queries, where the desktop browser would have to download all the CSS with Highest priority. We can’t assume that desktop users always have a fast connection: in many rural areas, for instance, internet connection speeds are still slow.
The media queries and number of separate CSS files will vary from project to project based on project requirements, but might look similar to the example below.
Bundled CSS: This single file contains all the CSS, including all media queries, and it will be downloaded with Highest priority.

Separated CSS: Separating the CSS into multiple files and specifying a media attribute on each link element allows the browser to download each file with the appropriate priority for the current viewport.
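As a sketch, the separated files might be linked like this; the file names are illustrative, and the breakpoints assume the 768px and 1024px ranges from the earlier example:

```html
<link rel="stylesheet" href="default.css" />
<link rel="stylesheet" href="mobile.css"
      media="screen and (max-width: 767.98px)" />
<link rel="stylesheet" href="tablet.css"
      media="screen and (min-width: 768px) and (max-width: 1023.98px)" />
<link rel="stylesheet" href="desktop.css"
      media="screen and (min-width: 1024px)" />
<link rel="stylesheet" href="print.css" media="print" />
```

On a mobile viewport, the browser treats default.css and mobile.css as render-blocking and downloads them with Highest priority, while fetching the non-matching files at Lowest priority.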
Depending on the project’s deployment strategy, a change to one file (mobile.css, for example) would only require the QA team to regression test on devices in that specific media query range. Compare that to the prospect of deploying the single bundled site.css file, an approach that would normally trigger a full regression test.
The uptake of mobile-first CSS was a really important milestone in web development; it has helped front-end developers focus on mobile web applications, rather than developing sites on desktop and then attempting to retrofit them to work on other devices.
I don’t think anyone wants to return to that development model again, but it’s important we don’t lose sight of the issue it highlighted: that things can easily get convoluted and less efficient if we prioritize one particular device—any device—over others. For this reason, focusing on the CSS in its own right, always mindful of what is the default setting and what’s an exception, seems like the natural next step. I’ve started noticing small simplifications in my own CSS, as well as other developers’, and testing and maintenance work is also becoming a bit simpler and more productive.
In general, simplifying CSS rule creation whenever we can is ultimately a cleaner approach than going around in circles of overrides. But whichever methodology you choose, it needs to suit the project. Mobile-first may—or may not—turn out to be the best choice for what’s involved, but first you need to solidly understand the trade-offs you’re stepping into.
As a UX professional in today’s data-driven landscape, it’s increasingly likely that you’ve been asked to design a personalized digital experience, whether it’s a public website, user portal, or native application. Yet while there continues to be no shortage of marketing hype around personalization platforms, we still have very few standardized approaches for implementing personalized UX.
That’s where we come in. After completing dozens of personalization projects over the past few years, we gave ourselves a goal: could we create a holistic personalization framework specifically for UX practitioners? The Personalization Pyramid is a designer-centric model for standing up human-centered personalization programs, spanning data, segmentation, content delivery, and overall goals. By using this approach, you will be able to understand the core components of a contemporary, UX-driven personalization program (or at the very least know enough to get started).
For the sake of this article, we’ll assume you’re already familiar with the basics of digital personalization. A good overview can be found here: Website Personalization Planning. While UX projects in this area can take on many different forms, they often stem from similar starting points.
Common scenarios for starting a personalization project:
Regardless of where you begin, a successful personalization program will require the same core building blocks. We’ve captured these as the “levels” on the pyramid. Whether you are a UX designer, researcher, or strategist, understanding the core components can help make your contribution successful.
From top to bottom, the levels include:

North Star
Goals
Touchpoints
Contexts and Campaigns
User Segments
Actionable Data
We’ll go through each of these levels in turn. To help make this actionable, we created an accompanying deck of cards to illustrate specific examples from each level. We’ve found them helpful in personalization brainstorming sessions, and will include examples for you here.
The components of the pyramid are as follows:
A North Star is what you are aiming for overall with your personalization program (big or small): it defines the (one) overall mission of the program. What do you wish to accomplish? North Stars cast a shadow: the bigger the star, the bigger the shadow. Examples of North Stars might include:
As in any good UX design, personalization can help accelerate designing with customer intentions. Goals are the tactical and measurable metrics that will prove the overall program is successful. A good place to start is with your current analytics and measurement program and metrics you can benchmark against. In some cases, new goals may be appropriate. The key thing to remember is that personalization itself is not a goal, rather it is a means to an end. Common goals include:
Touchpoints are where the personalization happens. As a UX designer, this will be one of your largest areas of responsibility. The touchpoints available to you will depend on how your personalization and associated technology capabilities are instrumented, and should be rooted in improving a user’s experience at a particular point in the journey. Touchpoints can be multi-device (mobile, in-store, website) but also more granular (web banner, web pop-up etc.). Here are some examples:
Channel-level Touchpoints
Wireframe-level Touchpoints
If you’re designing for web interfaces, for example, you will likely need to include personalized “zones” in your wireframes. The content for these can be presented programmatically in touchpoints based on our next step, contexts and campaigns.
Once you’ve outlined some touchpoints, you can consider the actual personalized content a user will receive. Many personalization tools will refer to these as “campaigns” (so, for example, a campaign on a web banner for new visitors to the website). These will programmatically be shown at certain touchpoints to certain user segments, as defined by user data. At this stage, we find it helpful to consider two separate models: a context model and a content model. The context helps you consider the level of engagement of the user at the personalization moment, for example a user casually browsing information vs. doing a deep-dive. Think of it in terms of information retrieval behaviors. The content model can then help you determine what type of personalization to serve based on the context (for example, an “Enrich” campaign that shows related articles may be a suitable supplement to extant content).
Personalization Context Model:
Personalization Content Model:
We’ve written extensively about each of these models elsewhere, so if you’d like to read more you can check out Colin’s Personalization Content Model and Jeff’s Personalization Context Model.
User segments can be created prescriptively or adaptively, based on user research (e.g. via rules and logic tied to set user behaviors or via A/B testing). At a minimum you will likely need to consider how to treat the unknown or first-time visitor, the guest or returning visitor for whom you may have a stateful cookie (or equivalent post-cookie identifier), or the authenticated visitor who is logged in. Here are some examples from the personalization pyramid:
Every organization with any digital presence has data. It’s a matter of asking what data you can ethically collect on users, how reliable and valuable it is, and how you can use it (sometimes known as “data activation”). Fortunately, the tide is turning to first-party data: a recent study by Twilio estimates some 80% of businesses are using at least some type of first-party data to personalize the customer experience.
First-party data offers multiple advantages on the UX front: it’s relatively simple to collect, more likely to be accurate, and less susceptible to the “creep factor” of third-party data. So a key part of your UX strategy should be to determine the best form of data collection for your audiences. Here are some examples:
There is a progression of profiling when it comes to recognizing and making decisions about different audiences and their signals. It tends to move toward more granular constructs about smaller and smaller cohorts of users as time, confidence, and data volume grow.
While some combination of implicit and explicit data (more commonly referred to as first-party and third-party data) is generally a prerequisite for any implementation, ML efforts are typically not cost-effective right out of the box. This is because a strong data backbone and content repository are prerequisites for optimization. But these approaches should be considered as part of the larger roadmap, and may indeed help accelerate the organization’s overall progress. Typically at this point you will partner with key stakeholders and product owners to design a profiling model. The profiling model defines the approach to configuring profiles, profile keys, profile cards, and pattern cards: a multi-faceted approach to profiling that makes it scalable.
While the cards are the starting point for an inventory of sorts (we provide blanks for you to tailor your own), a set of potential levers and motivations for the style of personalization activities you aspire to deliver, they are more valuable when thought of as a grouping.
In assembling a card “hand,” one can begin to trace the entire trajectory from leadership focus down through strategic and tactical execution. It is also at the heart of the way both co-authors have conducted workshops in assembling a program backlog—which is a fine subject for another article.
In the meantime, what’s important to note is that while each colored class of card is helpful to survey in understanding the range of choices potentially at your disposal, the real work lies in threading them together and making concrete decisions about for whom this decisioning will be made: where, when, and how.
Any sustainable personalization strategy must consider near-, mid-, and long-term goals. Even with leading CMS platforms like Sitecore and Adobe, or the most exciting composable CMS DXP out there, there is simply no “easy button” wherein a personalization program can be stood up and immediately deliver meaningful results. That said, there is a common grammar to all personalization activities, just like every sentence has nouns and verbs. These cards attempt to map that territory.
Humility, a designer’s essential value—that has a nice ring to it. What about humility, an office manager’s essential value? Or a dentist’s? Or a librarian’s? They all sound great. When humility is our guiding light, the path is always open for fulfillment, evolution, connection, and engagement. In this chapter, we’re going to talk about why.
That said, this is a book for designers, and to that end, I’d like to start with a story—well, a journey, really. It’s a personal one, and I’m going to make myself a bit vulnerable along the way. I call it:
When I was coming out of art school, a long-haired, goateed neophyte, print was a known quantity to me; design on the web, however, was rife with complexities to navigate and discover, a problem to be solved. Though I had been formally trained in graphic design, typography, and layout, what fascinated me was how these traditional skills might be applied to a fledgling digital landscape. This theme would ultimately shape the rest of my career.
So rather than graduate and go into print like many of my friends, I devoured HTML and JavaScript books into the wee hours of the morning and taught myself how to code during my senior year. I wanted—nay, needed—to better understand the underlying implications of what my design decisions would mean once rendered in a browser.
The late ’90s and early 2000s were the so-called “Wild West” of web design. Designers at the time were all figuring out how to apply design and visual communication to the digital landscape. What were the rules? How could we break them and still engage, entertain, and convey information? At a more macro level, how could my values of humility, respect, and connection align with all of that? I was hungry to find out.
Though I’m talking about a different era, those are timeless considerations that reach beyond the world of design. What are your core passions, or values, that transcend medium? It’s essentially the same concept we discussed earlier: the direct parallels between what fulfills you, agnostic of the tangible or digital realms; the core themes are all the same.
First with tables, animated GIFs, and Flash, then with Web Standards, divs, and CSS, there was personality, raw unbridled creativity, and unique means of presentation that often defied any semblance of a visible grid. Splash screens and “browser requirement” pages aplenty. Usability and accessibility were typically the casualties of such creations; these paramount facets of any digital design were largely (and, in hindsight, unfairly) disregarded in favor of experimentation.
For example, this iteration of my personal portfolio site (“the pseudoroom”) from that era was experimental, if not a bit heavy-handed, in the visual communication of the concept of a living sketchbook. Very skeuomorphic. I collaborated with fellow designer and dear friend Marc Clancy (now a co-founder of the creative project organizing app Milanote) on this one, where we’d first sketch and then pass a Photoshop file back and forth to trick things out and play with varied user interactions. Then, I’d break it down and code it into a digital layout.
Along with design folio pieces, the site also offered free downloads for Mac OS customizations: desktop wallpapers that were effectively design experimentation, custom-designed typefaces, and desktop icons.
From around the same time, GUI Galaxy was a design, pixel art, and Mac-centric news portal some graphic designer friends and I conceived, designed, developed, and deployed.
Design news portals were incredibly popular during this period, featuring (what would now be considered) Tweet-size, small-format snippets of pertinent news from the categories I previously mentioned. If you took Twitter, curated it to a few categories, and wrapped it in a custom-branded experience, you’d have a design news portal from the late 90s / early 2000s.
We as designers had evolved and created a bandwidth-sensitive, web standards award-winning, much more accessibility-conscious website. Still rife with experimentation, yet more mindful of equitable engagement. You can see a couple of content panes here, noting general news (tech, design) and Mac-centric news below. We also offered many of the custom downloads I cited before as present on my folio site, but branded and themed to GUI Galaxy.
The site’s backbone was a homegrown CMS, with the presentation layer consisting of global design + illustration + news author collaboration. And the collaboration effort here, in addition to experimentation on a ‘brand’ and content delivery, was hitting my core. We were designing something bigger than any single one of us and connecting with a global audience.
Collaboration and connection transcend medium in their impact, immensely fulfilling me as a designer.
Now, why am I taking you down this trip of design memory lane? Two reasons.
First, there’s a reason for the nostalgia for that design era (the “Wild West” era, as I called it earlier): the inherent exploration, personality, and creativity that saturated many design portals and personal portfolio sites. Ultra-finely detailed pixel art UI, custom illustration, bespoke vector graphics, all underpinned by a strong design community.
Today’s web design has been in a period of stagnation. I suspect there’s a strong chance you’ve seen a site whose structure looks something like this: a hero image / banner with text overlaid, perhaps with a lovely rotating carousel of images (laying the snark on heavy there), a call to action, and three columns of sub-content directly beneath. Maybe an icon library is employed with selections that vaguely relate to their respective content.
Design, as it’s applied to the digital landscape, is in dire need of thoughtful layout, typography, and visual engagement that goes hand-in-hand with all the modern considerations we now know are paramount: usability. Accessibility. Load times and bandwidth-sensitive content delivery. A responsive presentation that meets human beings wherever they’re engaging from. We must be mindful of, and respectful toward, those concerns—but not at the expense of creativity of visual communication or via replicating cookie-cutter layouts.
Websites during this period were often designed and built on Macs whose OS and desktops looked something like this. This is Mac OS 7.5, but 8 and 9 weren’t that different.
Desktop icons fascinated me: how could any single one, at any given point, stand out to get my attention? In this example, the user’s desktop is tidy, but think of a more realistic example with icon pandemonium. Or, say an icon was part of a larger system grouping (fonts, extensions, control panels)—how did it also maintain cohesion amongst a group?
These were 32 x 32 pixel creations, utilizing a 256-color palette, designed pixel-by-pixel as mini mosaics. To me, this was the embodiment of digital visual communication under such ridiculous constraints. And often, ridiculous restrictions can yield the purification of concept and theme.
So I began to research and do my homework. I was a student of this new medium, hungry to dissect, process, discover, and make it my own.
Expanding upon the notion of exploration, I wanted to see how I could push the limits of a 32×32 pixel grid with that 256-color palette. Those ridiculous constraints forced a clarity of concept and presentation that I found incredibly appealing. The digital gauntlet had been tossed, and that challenge fueled me. And so, in my dorm room into the wee hours of the morning, I toiled away, bringing conceptual sketches into mini mosaic fruition.
These are some of my creations, made with ResEdit, the only tool available at the time for creating icons. ResEdit was a clunky, built-in Mac OS utility not really made for exactly what we were using it for. At the core of all of this work: Research. Challenge. Problem-solving. Again, these core connection-based values are agnostic of medium.
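For a sense of the scale of those constraints: a 32 × 32 icon with a 256-color palette fits each pixel in a single byte. The sketch below is purely illustrative plain Python — an assumed representation, not the actual Mac OS icon resource format that ResEdit edited:

```python
# A 32x32 indexed-color icon: each pixel is one byte referencing a
# 256-entry palette. Illustrative sketch only, not a real resource format.
SIZE, PALETTE_ENTRIES = 32, 256

pixels = bytearray(SIZE * SIZE)            # one palette index per pixel
palette = [(0, 0, 0)] * PALETTE_ENTRIES    # 256 RGB triples

def set_pixel(x: int, y: int, index: int) -> None:
    """Place one tile of the mini mosaic, enforcing the palette limit."""
    assert 0 <= index < PALETTE_ENTRIES, "only 256 colors available"
    pixels[y * SIZE + x] = index

set_pixel(0, 0, 7)
print(len(pixels))  # → 1024 bytes: the entire canvas you had to work with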
There’s one more design portal I want to talk about, which also serves as the second reason for my story to bring this all together.
This is K10k, short for Kaliber 1000. K10k was founded in 1998 by Michael Schmidt and Toke Nygaard, and was the design news portal on the web during this period. With its pixel art-fueled presentation, ultra-focused care given to every facet and detail, and with many of the more influential designers of the time who were invited to be news authors on the site, well… it was the place to be, my friend. With respect where respect is due, GUI Galaxy’s concept was inspired by what these folks were doing.
For my part, the combination of my web design work and pixel art exploration began to get me some notoriety in the design scene. Eventually, K10k noticed and added me as one of their very select group of news authors to contribute content to the site.
Amongst my personal work and side projects, this inclusion put me on the map in the design community. My design work also began to be published in various printed collections, in magazines domestically and overseas, and featured on other design news portals. With that degree of success while in my early twenties, something else happened:
I evolved—devolved, really—into a colossal asshole (and in just about a year out of art school, no less). The press and the praise became what fulfilled me, and they went straight to my head. They inflated my ego. I actually felt somewhat superior to my fellow designers.
The casualties? My design stagnated. Its evolution—my evolution—stagnated.
I felt so supremely confident in my abilities that I effectively stopped researching and discovering. When previously sketching concepts or iterating ideas in lead was my automatic step one, I instead leaped right into Photoshop. I drew my inspiration from the smallest of sources (and with blinders on). Any critique of my work from my peers was often vehemently dismissed. The most tragic loss: I had lost touch with my values.
My ego almost cost me some of my friendships and burgeoning professional relationships. I was toxic in talking about design and in collaboration. But thankfully, those same friends gave me a priceless gift: candor. They called me out on my unhealthy behavior.
Admittedly, it was a gift I initially did not accept but ultimately was able to deeply reflect upon. I was soon able to accept, and process, and course correct. The realization laid me low, but the re-awakening was essential. I let go of the “reward” of adulation and re-centered upon what stoked the fire for me in art school. Most importantly: I got back to my core values.
Following that short-term regression, I was able to push forward in my personal design and career. And I could self-reflect as I got older to facilitate further growth and course correction as needed.
As an example, let’s talk about the Large Hadron Collider. The LHC was designed “to help answer some of the fundamental open questions in physics, which concern the basic laws governing the interactions and forces among the elementary objects, the deep structure of space and time, and in particular the interrelation between quantum mechanics and general relativity.” Thanks, Wikipedia.
Around fifteen years ago, in one of my earlier professional roles, I designed the interface for the application that generated the LHC’s particle collision diagrams. These diagrams are the rendering of what’s actually happening inside the Collider during any given particle collision event and are often considered works of art unto themselves.
Designing the interface for this application was a fascinating process for me, in that I worked with Fermilab physicists to understand not only what the application was trying to achieve, but also how the physicists themselves would be using it. To that end, in this role, I cut my teeth on usability testing, working with the Fermilab team to iterate and improve the interface. How they spoke and what they spoke about was like an alien language to me. And by making myself humble and working under the mindset that I was but a student, I made myself available to be a part of their world and generate that vital connection.
I also had my first ethnographic observation experience: going to the Fermilab location and observing how the physicists used the tool in their actual environment, on their actual terminals. For example, one takeaway was that due to the level of ambient-light-driven contrast within the facility, the data columns ended up using white text on a dark gray background instead of black-on-white. This enabled the physicists to pore over reams of data during the day while easing their eye strain. And Fermilab and CERN are government entities with rigorous accessibility standards, so my knowledge in that realm also grew. Barrier-free design was another essential form of connection.
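As a modern aside on that contrast decision: today’s accessibility standards quantify contrast numerically. The sketch below uses the published WCAG 2.x relative-luminance and contrast-ratio formulas; the specific dark-gray value is my assumption for illustration, not the actual color used in the Fermilab interface:

```python
def _linearize(c8: int) -> float:
    """Convert an 8-bit sRGB channel to linear light (WCAG 2.x formula)."""
    c = c8 / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio; symmetric, ranges from 1:1 up to 21:1."""
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

WHITE = (255, 255, 255)
DARK_GRAY = (51, 51, 51)   # assumed stand-in for the interface's dark gray
print(round(contrast_ratio(WHITE, DARK_GRAY), 1))  # ~12.6, well above the 7:1 AAA threshold
```

Note that the ratio itself is symmetric — white-on-dark-gray scores the same as dark-gray-on-white — which is precisely why the ambient-light observation from being on site mattered: the numbers alone could not have told the team which polarity eased eye strain.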
So we return to those core drivers of my visual problem-solving soul and ultimate fulfillment: discovery, exposure to new media, observation, human connection, and evolution. What opened the door for those values was me checking my ego before walking through it.
An evergreen willingness to listen, learn, understand, grow, evolve, and connect yields our best work. In particular, I want to focus on the words ‘grow’ and ‘evolve’ in that statement. If we are always students of our craft, we are also continually making ourselves available to evolve. Yes, we have years of applicable design study under our belt. Or the focused lab sessions from a UX bootcamp. Or the monogrammed portfolio of our work. Or, ultimately, decades of a career behind us.
But all that said: experience does not equal “expert.”
As soon as we close our minds via an inner monologue of ‘knowing it all’ or branding ourselves a “#thoughtleader” on social media, the designer we are is our final form. The designer we can be will never exist.