
Advent Calendars For Web Designers And Developers (December 2021 Edition)

Once again, the web community has been busy creating some fantastic advent calendars this year. As you’ll see, each and every one of these calendars is sure to deliver a daily dose of web design and development goodness, with stellar articles, inspiring experiments, and even puzzles to solve.

It doesn’t really matter if you’re a front-end dev, UX designer, or content strategist: we’re certain you’ll find at least something to inspire you for the upcoming year. Use this month of December as a time to slow down, reflect, and plan ahead. You won’t regret it.

Advent of JavaScript

If you sign up to the Advent of JavaScript, you’ll get an email every day outlining a JavaScript challenge. Each challenge includes all of the HTML and CSS you need, allowing you to focus on the JavaScript. You’ll also receive a brief on how to get started, steps to guide you through the challenge, and ways to push yourself further. You can get the challenges for free (or pay for the solutions).

Advent of CSS

For folks who are more into CSS, there’s the Advent of CSS, where you can sign up for a daily email outlining a CSS challenge that includes all the assets you need to get started, including a Figma design file. (If you don’t have a Figma account, don’t worry, it’s free.) Before accepting this challenge, you should know basic HTML and CSS.

JVM Programming Advent Calendar

The Java Advent 2021 is here! To make the advent season even sweeter for JVM enthusiasts, there will be a new article about a JVM-related topic every day. The project started in 2012 with the idea of providing technical content during the Christmas advent period, so keep looking for nice things under the Java Christmas tree! 🎄

Advent of Code

If you prefer a puzzle over an article, take a look at Advent of Code. Created by Eric Wastl, this is an advent calendar of small programming puzzles for a variety of skill sets and skill levels that can be solved in any programming language you like. You don’t need a computer science background to participate: just a little programming knowledge and some problem-solving skills will get you pretty far. Go ahead and give it a go!

Perl 6/Raku Advent Calendar

Back in October of 2019, “Perl 6” was renamed to “Raku”. It’s the 6th year since (what was then called) Perl 6 was released, and the 13th year in a row for this Raku Advent calendar. Stay tuned for lots of articles on metaprogramming, applications, useful Raku modules, programming techniques, guides on how to work with Raku inside containers, and even how to migrate from good ol’ Perl.

24 Pull Requests

24 Pull Requests is a yearly initiative to encourage contributors around the world to send 24 pull requests between December 1st and December 24th. The project is available in twenty languages, and encourages all kinds of contributions to open-source projects — including non-pull-request contributions. There’s a new contribution form on the site that allows you to record the contributions you’ve made each day that wouldn’t usually make sense as a pull request. Join in!

HTMHell Advent Calendar

If you’re already familiar with the HTMHell website, then you can guess how interesting its advent calendar is going to get! Take a peek behind each door of the HTMHell calendar where you’ll find an article, talk or tool that focuses on HTML. To be fair, HTMHell isn’t just about bad practices — Manuel also shares good practices and useful HTML tips and tricks. 🔥

PerfPlanet Calendar

An advent calendar that has been publishing since 2009 is back again: good ol’ PerfPlanet returns for another season with all things speed and web performance. Anyone is welcome to contribute to the calendar, so do feel free to reach out with a topic or tool you’re passionate about, or a technique you’d like to teach and tell the web performance community about.

C# Advent Calendar

It’s time for the fifth annual C# advent calendar that will feature two pieces of content every day. Anyone can contribute by sharing their blog posts, videos, articles or podcast episodes dedicated to C# development. In case all of the spots are already claimed, you can always sign up to be a substitute author. Rock on! 🎸

Inclusive Design 24

The good folks at Inclusive Design 24 are sharing their favorite talks from previous years of the good ol’ #id24 online-only conferences while counting down the days until the New Year. All videos have even been manually re-captioned, just so they’re all at their best.

Lean UXMas

Lean UXMas has been publishing each advent since 2014 and is a collection of the most popular Agile and Lean UX articles from the past year. If you find yourself impatiently waiting for the next article to be posted, you can always check out the previous advent calendars by changing the year in the base URL, or simply search for them below the website’s header.

Code Security Advent Calendar

If you’re up for a challenge that involves spotting security vulnerabilities, then the Code Security Advent Calendar is just the right one for you. Every day, a code security puzzle and/or riddle will be announced on Twitter, which you’re welcome to join and share with your friends to discuss solutions together. The most active players with the best solutions will be contacted to receive a cool swag pack. 🎁

Advent of Cyber

Security can be a daunting field. With Advent of Cyber, you can get started with cyber security by learning the basics and completing a new, beginner-friendly security exercise every day. For each question you answer correctly, you receive a raffle ticket for the main prize draw on December 26th, meaning the more questions you answer, the better your chances of winning. And every day you complete a challenge, you’re also entered into another draw for the chance to win a mini-prize. So, what are you waiting for?

24 Days In December

“PHP is not just a language. PHP is a group of people, a community of developers who build things for the web. The PHPamily spans the globe, and while we might not always agree or get along, we have one thing in common, we’re passionate about what we do.” Jonathan Bossenger hits the nail right on the head as he welcomes everyone to participate in the 6th edition of 24 Days in December. We all look forward to hearing your personal journeys and stories with PHP! 🌈

Umbraco Christmas Calendar

It’s the 10th year of 24 Days In Umbraco and it’s time to learn more about Umbraco (otherwise known as the ‘Friendly CMS’). If you’re interested in it but not sure where to start, you can always check out the articles by tag(s) and find the answers to your questions. The calendar was first started back in 2012 so there’s plenty of content to sift through.

Festive Tech Calendar 2021

With over 2K subscribers on YouTube, the Festive Tech Calendar is back at it again this year with videos from different communities and people around the globe. You’ll quickly find an entire collection of videos from all of the previous years, and neither the range of topics nor the diversity of speakers falls short. By the communities, for the communities indeed.

SysAdvent

SysAdvent is back this year! 🙌 With the goals of sharing, openness and mentoring, you’re in for some great articles about systems administration topics written by fellow sysadmins. Tune in each day for an article that explores the wide range of topics in system administration.

IT Security Advent Calendar

“Don't store sensitive data in the cloud; keep it entirely disconnected from the web.” Yup, that’s the credo delivered in the first advent door of the good ol’ IT Security Advent Calendar this year. Counting down to Christmas, this calendar is dedicated to sharing a new tip for protecting your devices, networks, and data each day.

Bekk Christmas

This year’s Bekk Christmas features opinion pieces, tutorials, podcasts, deep dives and lots of other formats. Pick the ones that seem interesting to you, and consume them whenever you like. It’s worth digging through the archives (see e.g. 2020) — there’s a golden gem hidden in each one of them!



It’s nice to find some calendars in languages other than English, too! Here are a few we stumbled upon:

24 Jours De Web (French)

24 Jours De Web is a lovely French calendar which first appeared back in 2012, and has been continuing the lovely tradition of online advent calendars ever since. 24 authors come together each year and publish an article on UX, accessibility, privacy, and other topics related to the good ol’ web.

SELFHTML Adventskalender (German)

This year’s SELFHTML Adventskalender is dedicated to accessibility, a topic that concerns everyone. Why? Because accessibility is good for all of us: accessible websites are simply better websites. At the end of the day, everyone hits a large key faster and more reliably than a small one. To all the German-speaking developers out there: you’ll find plenty of reasons to include accessibility as much as possible. Also, make sure to bookmark the SELFHTML wiki so you have the latest documentation and tutorials at hand.

WEBアクセシビリティ Advent Calendar (Japanese)

This Japanese advent calendar has been running since 2013 and is moderated by @hokaccha. Its focus lies on web accessibility, with a new author exploring a different topic each day, from accessibility basics to the various technologies you may want to explore for your projects. Once logged in, you can save a spot on the calendar and have your article or work published on that particular day.

Kodekalender (Norwegian)

Knowit is one of the Nordic region’s leading consulting companies. They have once again brought their Norwegian calendar to life, and it is just the kind of holiday calendar for those of you who love programming. Behind each hatch hides a task you have to answer in the form of a simple text string or a number. The hatches vary in degree of difficulty and design, but what they all have in common is that they’re best solved with code. Solve as many hatches as possible to increase your chances of winning! Good luck! 🙌


Do you happen to know any other advent calendars that have been created in languages other than English? Please do feel free to reach out to me on Twitter and I’ll be sure to add them to this list.

Oldies But Goodies

Christmas Experiments (2018)

Christmas Experiments started back in 2012, with the goal of delivering great experiments and highlighting top web creatives as well as newcomers. It was a pretty cool WebGL advent calendar that featured a new experiment each day, quite obviously made with love by digital artists. Unfortunately, it did not continue after the 2018 edition.

24 Accessibility (2019)

An advent calendar we surely miss is 24 Accessibility. The site hasn’t published a new article since 2019, but it still offers a good archive of articles on all subjects related to digital accessibility. Whether you are new to accessibility or a veteran, a developer, designer, user experience professional, quality assurance analyst, or project manager, you’ll find an article of interest in the series.

It’s A Shape Christmas (2019)

It’s A Shape Christmas is a digital calendar that counts down to Christmas and reveals a bespoke illustration each day themed around four different shapes (Square, Triangle, Circle and Hexagon) and Christmas. The project was started in 2011 by a UK design agency called Made by Shape. The website still showcases some of the best from the previous seasons. I’m sure you’ll agree: they’re all just too good not to be shared! ✨

24 Ways (2019)

First initiated by Drew McLellan, 24 ways started out as a simple website that published a new tip or trick each day, leading readers through December up until Christmas. It launched in 2005, and all of the calendars are still available online. Unfortunately, the last edition was published in 2019; the site has been taking a well-earned break since that year’s “final countdown”.

Perl Advent (2020)

The Perl Advent started back in 2000 and is perhaps the longest-running web advent calendar that many know of. You’ll find insightful articles written by authors of all Perl programming levels. A different Perl module is featured on each of the twenty-four days of advent, plus an extra module on Christmas day. Make sure to go through the previous Perl advent calendars, too; it’s worth it.

PWAdvent (2020)

PWAdvent is a nice advent calendar for everyone who’s excited about the web platform and Progressive Web Apps, of course. Take a look at all the great stuff the web has to offer in last year’s calendar, in which a new progressive browser feature was introduced every day by Nico Martin himself and others.

A11y Advent Calendar (2020)

Heydon Pickering once said, “Accessibility is not about doing more work but about doing the right work.” Last year, Kitty Giraudel decided to publish an accessibility tip a day in their very own #A11yAdvent. Some of the tips are probably common knowledge for many, yet each post covers important aspects of accessibility that will still hold true for years to come.

Last But Not Least...

Of course, we wanted to join in the fun ourselves and brought our very own #SmashingAdvent to life! As you probably already know, the Smashing team has been organizing conferences and events since 2012, so there are plenty of gems to shine the spotlight on. Do give @SmashingConf a follow on Twitter where we’ll be sharing our favorite talks and interviews with speakers from all over the globe.

On behalf of the entire Smashing team, we’d like to say thank you to each and every one involved in these projects — we see you! The communities in our web industry wouldn’t be able to learn so much and thrive if it wasn’t for your time, hard work and dedication. We all sincerely and truly appreciate each and every one of you. 🙏

And of course, if there’s a calendar that isn’t mentioned here, please do post it in the comments section below.

3 Dec 2021 | 3:00 am

How To Protect Your API Key In Production With Next.js API Route

Front-end developers often have to interact with private or public APIs whose method of authorization requires a secret key (an API key) that enables them to use these APIs. Because the keys are sensitive, the need to store and protect them arises. Creating an environment variable that stores the key is the go-to solution that most developers tend to embrace, but there’s a catch: an environment variable alone does not protect the key from anyone who knows their way around their browser’s dev tools. That’s why we need to perform our API calls on the server side.

In this article, we’ll be using Next.js to bootstrap our app. This does not mean that the create-react-app library will not work. You can make use of any one that you find convenient. We’re using Next.js because of the many perks that come with it. (You can read more about Next.js here.)

Let us start by installing the dependencies that we need in this project. We’ll start by creating a Next.js app. The command below does that for us:

npx create-next-app [name-of-your-app]

We’ll make use of the native JavaScript Fetch API to get data from the API. We won’t be covering much of the styling aspect in this article. (If you want to take a look at an example project I built using the Next.js API route pattern, you can find the repository here.)

Now let’s have a look at the file structure of the app. We’ll be focusing on the important files needed in this app, so it’ll be concise.

|--pages
|   |-- api
|   |   |-- serverSideCall.js  
|   |-- _app.js
|   |-- index.js
|__ .env.local
Breakdown Of The Files In The App Structure

In this section, we are going to see the different files that make up the architecture of this project, and their respective functions below.

The pages directory is where all the routing of the app takes place. This is an out-of-the-box feature of Next.js. It saves you the stress of hard-coding your independent routes.

  • pages/api
    The api directory enables you to have a backend for your Next.js app, inside the same codebase, instead of the common way of creating separate repositories for your REST or GraphQL APIs and deploying them on backend hosting platforms like Heroku, and so on.

    With the api directory, every file is treated as an API endpoint. If you look at the api folder, you’ll notice that we have a file called serverSideCall.js in it.

    That file becomes an endpoint, which means an API call can be performed using the path to the file as the base URL.

const getData = async () => {
  fetch("/api/serverSideCall")
    .then((response) => response.json())
    .then((data) => console.log(data))
    .catch((err) => console.log(err));
};
  • pages/_app.js
    This is where all our components get attached to the DOM. If you take a look at the component structure below, you’ll see that each page component is rendered as Component, with its pageProps passed along as props.
import React from "react";
import Head from "next/head";

function MyApp({ Component, pageProps }) {
  return (
    <React.Fragment>
      <Head>
        <meta name="theme-color" content="#73e2a7" />
        <link rel="icon" type="image/ico" href="" />
      </Head>
      <Component {...pageProps} />
    </React.Fragment>
  );
}

export default MyApp;

If you are new to Next.js, kindly go through this article that will guide you through the process.

  • index.js
    It is the default route in the pages folder. When you run the command below, it starts up a development server and the contents of index.js are rendered on the web page.
npm run dev
  • .env.local
    It is where we’re storing the API key that’ll enable us to consume this API.
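For illustration, here’s what a minimal .env.local might look like. The variable name KEY matches what the API route will read via process.env.KEY later on; the value shown is just a placeholder:

# .env.local (kept out of version control)
KEY=your-secret-api-key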
The Server-Side API Call

The previous section exposed you to the files that we’ll be interacting with and their specific functions. In this section, we will move on to how we can consume the API.

The reason we’re writing the API call on the server side is to secure our API key, and Next.js already makes this an easy task for us. With API routes in Next.js, we can perform our API calls without the fear of our API keys being revealed on the client side.

You may have been wondering what the essence of the environment variable in the .env file is in this scenario.

In Next.js, environment variables defined in .env.local are only exposed to the Node.js process on the server. Unless a variable is prefixed with NEXT_PUBLIC_, it is never bundled into the JavaScript that ships to the browser.

That is why we can do something like process.env.KEY inside an API route and get access to the environment variable, while client-side code that tries the same thing simply gets undefined, in development and in production alike.
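As a quick sketch of that distinction (NEXT_PUBLIC_ANALYTICS_ID is a made-up variable name for illustration):

// pages/api/serverSideCall.js (runs on the server only)
console.log(process.env.KEY); // defined here, and never shipped to the browser

// Inside any code that ships to the browser
console.log(process.env.KEY); // undefined on the client
console.log(process.env.NEXT_PUBLIC_ANALYTICS_ID); // NEXT_PUBLIC_ variables are inlined at build time and visible to anyone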

Now that you have seen why we need to write a server-side API call, let’s get to it right away.

export default async function serverSideCall(req, res) {
  const {
    query: { firstName, lastName },
  } = req;

  const baseUrl = `https://api.example-product.com/v1/search?lastName=${lastName}&firstName=${firstName}&apiKey=${process.env.KEY}`;

  const response = await fetch(baseUrl);
  const data = await response.json();

  res.status(200).json({ data });
}

In the snippet above, we created an asynchronous function called serverSideCall. It takes in two arguments: req, which stands for “request,” and res, which stands for “response.”

The req argument has some properties (or “middlewares,” as the Next.js docs call them) that can be accessed when we’re consuming our API; one of them is req.query.

You’ll notice that we destructured the query property in the snippet above, so we can now pass those variables as values to the query parameters of the API endpoint. Take a look at it below.

Note: You can read more about the in-built middlewares that come with the req argument here.

const {
  query: { firstName, lastName },
} = req;

The base URL takes the destructured query properties as values, and the apiKey is read from the .env file via the Node.js process object.

The destructured query properties are sent as a request from the input values of the form component (which we’ll be creating in the next section) to the API. Once it is received, we get back a response that corresponds to the request we made.

const baseUrl = `https://api.example-product.com/v1/search?lastName=${lastName}&firstName=${firstName}&apiKey=${process.env.KEY}`;

The next thing the function has to handle is the response from the asynchronous API call. The snippet below performs the call with the Fetch API, parses the JSON body, and assigns the result to a variable, data.

On the next line, the res argument uses the status method to send a JSON response back to the caller, with the parsed result exposed under a data property.

const response = await fetch(baseUrl);
const data = await response.json();
res.status(200).json({ data });
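Assuming the external API returns an array of matching users (the fields below are hypothetical and depend entirely on the API you’re calling), a request to our route would produce a response shaped like this:

// GET /api/serverSideCall?firstName=Ada&lastName=Lovelace
{
  "data": [
    { "firstName": "Ada", "lastName": "Lovelace", "email": "ada@example.com" }
  ]
}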

You can read more about the various HTTP status codes here.

Practical Usage Of The Server-Side API Function

In this section, we’ll have a look at how we can utilize the server-side API call by creating a form with two input fields. The input values will be sent as query parameters to the API endpoint.

import React from "react";

const Index = () => {
  const [data, setData] = React.useState([]);
  const [firstName, setFirstName] = React.useState("");
  const [lastName, setLastName] = React.useState("");

  const getuserData = async () => {
    // api call goes here
  };

  const handleSubmit = (e) => {
     e.preventDefault();
     getuserData();
  };

  return (
     <React.Fragment>
       <form onSubmit={handleSubmit}>
          <label htmlFor="firstname">First name</label>
          <input
            type="text"
            name="firstname"
            value={firstName}
            placeholder="First Name"
            onChange={(e) => setFirstName(e.target.value)}
          />
          <label htmlFor="lastname">Lastname</label>
          <input
            type="text"
            name="lastname"
            value={lastName}
            placeholder="Lastname"
            onChange={(e) => setLastName(e.target.value)}
          />
           <button>Search</button>
        </form>
        <div className="results-from-api"></div>
    </React.Fragment>
 );
};

export default Index;

Since this is a React component that is receiving data from an API endpoint, it should have an internal state of its own. The snippet below shows how we defined the different state variables with React Hooks.

const [data, setData] = React.useState([]);
const [firstName, setFirstName] = React.useState("");
const [lastName, setLastName] = React.useState("");

The firstName and lastName variables store whatever text is typed into the input fields in the local state variables.

The data state variable helps us store the response we get from the API call as an array, so we can use the JavaScript map() method to render the response on the webpage.
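For example, the results div in the component above could render the stored array like this (the user fields are hypothetical; they depend on the shape of the API’s response):

<div className="results-from-api">
  {data.map((user, index) => (
    <p key={index}>
      {user.firstName} {user.lastName}
    </p>
  ))}
</div>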

Below, we’re using the Fetch API to get data from the API endpoint. But here, the base URL is not a typical https:// URL; instead, it is the path to the file where we wrote the server-side API call earlier.

const getuserData = async () => {
  fetch(`/api/serverSideCall?firstName=${firstName}&lastName=${lastName}`, {
    headers: {
      Accept: "application/json",
    },
  })
    .then((response) => response.json())
    .then((json) => {
      setData(json.data);
      console.log(json.data);
    })
    .catch((err) => console.log(err));
};

The same process from the serverSideCall.js file is repeated here, but this time with the necessary Fetch API headers and with the input state variables assigned to the API query parameters.

Conclusion

There are other approaches that can help achieve this feat. Here are some of them:

  • Creating Netlify Lambda functions that’ll help protect your API keys from the client side.
    This approach pretty much does the job for you, though it involves writing a bit more code; if that’s not your thing, the Next.js API route is your best bet for solving this issue.
  • Server rendering with Next.js to hide API keys.
    In this video, Ijemma Onwuzulike gives an explanation of how to get this done with server-side rendering. I recommend checking it out.

Thank you for reading this article. Kindly share it and also feel free to take a look at a practical example project that I built using the Next.js API route here.


2 Dec 2021 | 10:30 pm

Smashing Podcast Episode 44 With Chris Ferdinandi: Is The Web Dead?

In this episode, we’re asking if changes to best practices over the last year have negatively impacted the web. Is it all downhill from here? Drew McLellan talks to expert Chris Ferdinandi to find out.

Show Notes

Weekly Update

Transcript

Drew McLellan: He’s the author of the Vanilla JavaScript Pocket Guide series, creator of the Vanilla JavaScript Academy training program, and host of the Vanilla JavaScript Podcast. We last talked to him in July 2020, where we asked if modern best practices are bad for the web. So we know he’s still an expert in Vanilla JS, but did you know he’s solely responsible for New Zealand being missing from 50% of world maps? My smashing friends, please welcome back, Chris Ferdinandi. Hi Chris, how are you?

Chris Ferdinandi: Oh, I’m smashing. Thanks for having me Drew. Interesting thing. I actually make sure New Zealand is not on maps because it’s probably my favorite country in the whole world and I don’t want too many people to know about it.

Drew: You want it to remain unspoiled.

Chris: Indeed.

Drew: So welcome back to the podcast. Last time we talked, we posed this question of whether modern best practices, the use of reactive frameworks and these sorts of things, were actually bad for the progress of the web. And I don’t know whether it was a controversial episode or it just struck a chord with a lot of listeners, but that conversation has been one of the most shared and listened-to episodes that we’ve put out at Smashing.

Chris: Oh, that’s awesome.

Drew: It’s actually been more than a year now, 15 months since we recorded that, which at the pace the web moves is like literally forever. So I wanted to ask, has anything changed? Is the web still in a terminal decline? Has the needle shifted at all?

Chris: Yeah, quite a bit has changed, quite a bit has not. So I think, it’s so weird. The web technology changes so fast, but the web itself tends to move a little bit slower just in terms of developer trends and habits. And so you see these slightly longer arcs where you’ll have a bunch of technology pile up around one approach and then it’ll slowly start to swing the other way and then change all at once. And so last time we talked, I think one of the big kind of... Well, I had two big points related to the modern web. The first was, we’re using a lot of tools that give developers convenience, but we’re using those tools at the expense of the user. So we’re throwing a ton of client-side JS at people, and that introduces a ton of fragility and performance issues.

Chris: The other big point that I was really hammering on was that these tools don’t necessarily improve the developer experience as much as I think people think they do. They do for some people. And I think for another segment of the front end professionals it actually can make things a little bit worse. But what I’m starting to see happen now, and one of the things I’d love to dig into a little bit more is I think we’re seeing a new, it’s almost like a second generation of tools that take a lot of the developer benefits that these client-side frameworks bring and strip away the punishing effects that we put on our users as a result. So it’s taking those same concepts and tools and packaging them a little bit differently in a way that’s actually better for the front end.

Chris: So one of the things I’ve been talking about with people lately is this idea that modern development has broken the web, but it’s also starting to fix it. And so we can definitely dig into that in a bunch of different angles, depending on where you want to take this conversation.

Drew: Sure. What sort of things have you seen in the last year that really stand out from that point of view?

Chris: Yeah, so the two biggest trends I’ve noticed are the rise of microframeworks. So where we saw a lot of really big, all-encompassing libraries for a while, React, Vue, before that Angular, which is just a massive beast at this point, we’ve started to see smaller libraries that do the same thing come into their own. So for example, I think the king of this hill is probably Preact, which is a three kilobyte alternative to React that uses the same API, ships way less code and actually runs orders of magnitude faster on state updates than React does too. So you’ve got things like that.

Chris: For a while you had... Well, it’s still out there, but Alpine JS, which was inspired by Vue.js and then actually inspired Evan You, who built Vue, to release Petite Vue, which is a 5.5 kilobyte subset of Vue that’s optimized around progressive enhancement. So these are still client-side libraries, but the intent behind them is that they ship less code, include fewer abstractions and ultimately work faster and put less of that cost on the front end user. So that’s been one angle.

Chris: And then the second trend I’ve seen that I think is personally more compelling is a shift from libraries to compilers. And so the one that kicked this whole trend off was Svelte by Rich Harris, which takes the idea of state-based reactivity. But instead of having this be a thing that runs in real time in the client, you author your code with the same general pattern that you might with React or Vue, and then you run a build tool that compiles all that into plain old HTML and vanilla JavaScript, and that’s what gets shipped to the browser. And so you’ve stripped out almost all of the abstractions in the client and you deliver something that’s way closer to what you might hand write with old school DOM manipulation, but with the developer convenience of state-based UI. So that was really interesting.

Chris: More recently there’s a new tool called ASTRO that builds on what Rich did with Svelte, and also allows you to pull in components from your favorite libraries so you can mix and match Vue, React, Svelte, vanilla JavaScript, all in one package, compile it all out into vanilla JavaScript and ship orders of magnitude less code without the abstractions. And so it would run way faster in the browser as well. And those are, I think for me, really the two big things that are like standing on the shoulders of giants and producing a front end that will hopefully start to be a little bit faster. The compilers in particular are interesting because they take us away from rendering HTML in the browser as much as possible. You still render your HTML or you still author it with JavaScript if you want, but the outputted result is more static HTML and less JavaScript, which is always a good thing.

Drew: Do you think this is the ecosystem’s response to this quiet developer dissatisfaction about the weight of modern frameworks? Is it just a natural heave and ho?

Chris: Yeah, it is. Although to be honest, I’m not entirely sure how much of this was driven by... Well, there are definitely some performance-minded developers out there who have been very vocal about how these tools are bad for the user. I don’t know that that’s necessarily representative of the general population, though. I mean, certainly a subset of it, given how that episode did the last time we talked. But I think the thing that none of these tools get at for me, the thing that most bothers me about the modern web that I don’t think these tools address, is that I personally feel like the development process in general is overcomplicated.

Chris: This is where I get into the whole like, I don’t think the developer experience is actually better with these tools, but I think for a lot of developers in maybe a team environment, it can be. For me as a largely solo developer, I find these tools more trouble than they’re worth, but I know a lot of folks disagree with me there, so I don’t want to dismiss that as invalid. If you find these tools useful great, but yeah, I think this is maybe a natural pendulum swing back in the other direction.

Chris: The third thing that I didn’t talk about that your question actually makes me think about though is, there is almost a natural cycle in the web where you start to throw a lot of JavaScript at solving problems as the web and the capabilities of it grow. And eventually those JavaScript libraries get absorbed by the platform itself, but it’s a much slower process than creating a new JavaScript library is, because of standards processes and how important those are. So we saw the same thing happen with jQuery, right, where the amount of JavaScript being used on the web swelled with jQuery and jQuery plugins.

Chris: And then eventually the web platform realized that these ways of doing things are really smart and we started to get native ways to do it. And then there was this really long, slow petering off of the shift away from jQuery. So I think these libraries, as much as they’ve done a lot of... I’ll be a little controversial here and say they’ve done a lot of damage to the web. They’ve also served an important function in paving cow paths for what native APIs could look like and could do. So I don’t want to completely dismiss them as terrible.

Drew: It’s interesting that you mentioned ASTRO just a little bit earlier. I’ve actually recorded an interview with Matthew Phillips. I’m not sure if it goes out before or after this one. He’s one of the core developers on ASTRO. And it certainly is a very creative and interesting approach to the problem. I do wonder, as you were saying, how much of this is that we’ve created a set of problems for ourselves, and so now we’ve created a new solution which patches over those problems and gives us something even better. But are we just stacking the bricks on top of each other and still ending up with a very wobbly tower because of it? Are we just going down the wrong path?

Chris: It depends. So, as the hair on my head has started to disappear and my beard has gotten whiter, I’ve started to talk in fewer absolutes than I did. And so five years ago, I would’ve said, "Absolutely yes." I don’t want to diminish the value of these tools in a team environment. And the other thing I honestly think a lot of libraries really have the potential to at least patch-fix in the interim is accessibility problems on the web around complex UI components. So in short, if I were to give this just a one-sentence answer: yes, I do think in many ways we’re creating a really delicate house of cards that collapses very easily. And I think one of the nicest things about using mostly or almost entirely platform-native features to build for the web, so just authoring in HTML, CSS and JavaScript, is that you cannot touch that code for five years, come back to it, and you don’t have any dependencies to update. You don’t have any build tools to run to start working with it again. It just works. And that’s really great.

Chris: But I think the thing I see with libraries is a lot of them come into creation to fill gaps in what the platform can do. And what I’ve noticed happens is after the platform catches up, the libraries stick around for a really long time. And so the thing I always try to do is be a little bit deliberate about what I add to the things I build, because it’s really easy to add stuff and really hard to take it away once it’s there. And just to ground these heady, abstract concepts I’m talking about for a sec: every year, WebAIM, a web accessibility consultancy firm, does a survey of the top million sites on the web. And they run automated audits. They’re not doing a detailed inspection of all these sites, so it’s just the simple stuff that automated checks can pick up. And historically, one of the things they’ve always found is that sites that use UI rendering libraries have more accessibility issues than sites that don’t.

Chris: This year they found the same trend with one exception. Sites that use React actually have fewer accessibility issues than sites that don’t. And that is a noticeable departure from the year before, when React sites had more accessibility issues.

Chris: I noticed a lot of focus on accessibility in the React community over the last year, building more accessible components, accessible routing, things of that nature. And for complex components, things like tabs and disclosure widgets, and sliders and things like that, it is really hard to do those accessibly with just HTML and Vanilla JavaScript. Trying to keep track of which ARIA attributes you need to add on, which elements and how to change them based on different behaviors and how to shift focus and announce different things is really complex. And I think these libraries as much as they can be a very delicate house of cards, I see a huge potential there to fill these gaps. Where I’d ultimately love to end up though, is in a place where the platform, the web, browsers offer native components that do those things so that you don’t need the libraries. And I think the details and summary elements provide a really nice model for what that could look like.

Chris: So if you’re listening to this and you don’t know what those are, the details element is an HTML element that you wrap around some content, and then inside it you nest a summary element with like a little description of what’s in that content. And by default, this element will be a collapsed bit of content. And when you click on the stuff in the summary, it expands and then when you click it again, it collapses and it shows a little arrow when it’s open or closed to indicate what’s happening here. It’s accessible out of the box. If the browser doesn’t support it, it shows all the content by default. So it’s just automatically progressively enhanced. You don’t need to do anything special there.

Chris: It can be styled with CSS. You can even change what the icons that display when it’s expanded and collapsed are, just with CSS. You don’t need to write any JS for it, but if you wanted to extend the behavior in some way you can, because it also exposes a JavaScript event that fires whenever it’s toggled open or closed. And I would love to see more stuff like that for tabs, for image sliders or carousels or photo galleries, which just... We have so many different interactive components now on the web that may or may not always be appropriate, but they’re in the designs and people are building them and having a way to do those things where you didn’t have to fumble through how to make them accessible or lean on a 30 kilobyte library would be awesome.

Chris: And so for me, that’s, I think, the next evolution of the web. That’s where I really want to see things start to go. And I think that’s the big need that these libraries address today in addition to some other stuff like changing the UI based on state changes and interesting use cases like that.

Drew: Yeah. Modern browsers are just so capable now. They automatically update themselves, and they natively include many of the features that we previously relied on big frameworks and build tools for. Is the requirement of a build process to deploy a project a red flag in 2021? Should HTML and CSS and JS just be deployable as they are?

Chris: So technically they are. I just don’t think that’s necessarily realistic today for most apps or sites or companies. I don’t know that I’d call it a red flag as much as a resigned “I wish it wasn’t like this, but I understand why it is,” for me. Even for myself, my site has several thousand pages on it now. I think I’m up to three or four thousand pages, and there’s no way I am just hand-coding all those. I use a static site generator, and I think tools like that can be really great.

Chris: I think there’s some challenge there in that they become things that have to be kept updated and maintained. And so I like to keep mine as lean as possible, but I think build tools that put more of the run time on you, the developer, and thus allow you to ship less to the browser are a good thing, especially as the things we build become more complex. So I don’t know that I would necessarily say it’s just by default a red flag. I think a lot of it depends on how you’re using it. If you need to run a build to ship a one or two page marketing site or brochure site, yeah, that’s a red flag. But if you’re building some complex applications and these allow you to author in a way that’s more sensical for you and then ship less stuff to the browser, that’s not a bad thing. And that’s why I find tools like ASTRO really, really interesting because there is still a build step there, but it’s a build step in the service of providing a better end user experience.

Drew: Yes. It’s shifting all that computation onto the server, to build time or deploy time, and not page request time.

Chris: Yeah. And so for me, I almost break build steps into... Like for me, the gold standard is if I can ship it without any build step at all, that’s awesome. But even for myself, the vanilla JS guy, that’s not how I do things a hundred percent of the time today. And so I think the next step up is compilers that reduce your code to as much HTML and plain old JavaScript as possible, versus those that create even more JavaScript, like the ones that take a bunch of little files and make an even bigger file. So more of the former, less of the latter if possible is always a good thing, but not always possible.

Drew: I think getting off the dependency treadmill, as it were, it’s got to be a big draw to a Vanilla JavaScript approach, not having a million dependencies to be updating all the time, but I guess one of the advantages to some of these bigger frameworks is that they sometimes dictate and sometimes facilitate a uniform way of working, which is really important with larger teams. Is there a danger of a project going a bit off the rails without those standards and procedures in place that a framework imposes?

Chris: Yes. Yeah. I think that’s fair. I used to downplay, I think, the significance of this for a while. And I think that is valid. That is a fair benefit of these tools. I think that maybe the small counter argument here is if you Google, "How to do X with React," you’re going to get half a dozen different approaches to doing that thing. So there are conventions, but there’s not necessarily hard and fast, like if you don’t do it this way, everything breaks kind of rules. One of the appeals of these tools is that they have a lot of flexibility. Certainly they do enforce more standard approaches though than just green fields, browser native things do. And so I think there’s maybe a bit of a balance, even if you don’t have a strong team lead who’s driving internal code standards.

Chris: I have seen even framework-based projects go off the rails with hodgepodge approaches before. So it’s not like these tools automatically give you that, but they definitely give you some guidelines, maybe some rails that nudge you in the right direction. And I know some people need that. If that is something you need, this is where I really like that we’re seeing more of these smaller libraries that use the same conventions, like Petite Vue or Preact, and compilers that also... Like Svelte has some very rigid rails around it, certainly more so than you would see with ASTRO, and so if you really need that, I think you have some options that don’t punish users for that need as much as what we had been doing a few years ago.

Drew: In the work that I do, we use Vue, and the Vue single file components are a really compelling case for this, in that we have engineers writing front-end code who aren’t necessarily front-end specialists, who say here’s a way to create a skeleton single file component: your template goes here, your JavaScript goes here, your CSS goes here. And just naturally as a result of that, we end up with a very consistent code base, even though it’s been created by a very diverse set of people. And so conventions like that can really have a big benefit to teams who aren’t necessarily all headed in the same direction because the engineering department’s so massive or whatever.

Chris: Yeah, for sure. Where I think you sometimes get into trouble with that... And I agree. I absolutely like the ability to make a code base look consistent with a bunch of different people working on it is really, really important because the people writing the code today are not necessarily going to be the ones maintaining it later. And that can get messy fast. The flip side is, if you are someone who is not comfortable or really well versed in JavaScript, a lot of the modern tool set is really geared towards JavaScript. And there are a lot of people on teams who specialize primarily in HTML or CSS or accessibility. And for them, JavaScript is not a core competency nor do I think it’s fair to expect it to be. And just like you don’t expect all your JavaScript developers to be experts in CSS.

Chris: And so it can make their job a lot harder. And this is for me, always that like that give and take with these tools is they can do some really awesome things, but they can also gate keep a lot of people out of the process. And I feel like that balance is different from team to team, but for me, one of the big arguments for leaning more on browser native stuff, or ditching as many of those dependencies as possible is that you open up your development process to a lot of people who are not as JavaScript heavy.

Drew: There’s always this undercurrent within the industry that suggests there’s the current way of doing things, the latest and there’s the outdated way. And if you’re not up to date with whatever the latest is, you’re somehow not as good an engineer or whatever.

Drew: In your estimation, does taking a Vanilla JavaScript approach enable you to swim free of all that? Is Vanilla JS like an evergreen approach that stands apart from those techniques?

Chris: Yeah. Yeah. There’s a few threads in what you just mentioned, Drew. So one of them is, if you understand the fundamentals of the web, I have found that it’s a lot easier to like a bee, just bounce from different technology to different technology and understand it enough to like... Even if you don’t use it, look at it and be like, "Okay, I can see some benefits to this or not, and evaluate whether it’s the right choice." If you need to dive into a new technology based on client needs or shifting direction in the company, you can. I think it’s a lot harder to do that if you only know a library and you’ve only learned the web in the context of that library.

Chris: Now the caveat here is, I learned JavaScript in the context of jQuery and then backed my way into Vanilla JavaScript, and then moved on to a bunch of other things too. The more I think about how that process went for me though, I think I was able to do that as easily as I did in large part because by the time I made that jump, ES5 had come out and had taken a bunch of its conventions from jQuery. And so there was a lot of these real one for one map. Mental map things I could do. I don’t know if we’re quite there yet with some of the state based UI libraries, but we’re definitely headed in that direction and I think that’s great. But the other thing here, there is this real pressure, as you mentioned in the industry to always keep up to date with all these new technologies, in large part because people who develop these technologies and people who work at the big companies are the ones who get invited to speak at conferences and talk about all the cool things they’ve built.

Chris: But the reality is that a lot of our web, like I’d say a majority of our web, runs on boring old technology that hasn’t been updated in a while, or has been updated, but in just a patch fix process. A lot of really important applications run on Python or PHP, or as a backend with just some sprinkling of lightweight HTML, CSS, and JavaScript on top. jQuery is still used on a lot of important stuff to the exclusion of other libraries. And it doesn’t always feel like it because I feel like most job descriptions that you see talk about wanting experience in React or Vue or something these days. But my experience from working in bigger technology companies or older product companies, is that there are a lot of jobs to be found working on old stable technology. And a lot of times it’s not always the most exciting work, but a lot of times they’re jobs that pay well and have really great hours and a lot of work life balance in a way that you won’t get in a really exciting tech company working on the latest stuff.

Chris: And so there’s these trade-offs there. It’s not always a bad thing. Yeah, I think it’s one of those, like the new, new, new thing is potentially a very vocal minority of the web that’s not representative of the web as a whole.

Drew: And there seems to be, along with this idea that you should be adopting everything new and immediately casting away everything that you’ve been using for the last 12 months, this idea that you should be engineering things to an enterprise grade, that you ought to be doing every small project the way that an enormous company with 400 engineers is building things. And those two ideas actually aren’t compatible at all. It’s the big companies with all these hundreds of engineers who are using the old crusty technology, because it’s reliable and they’ve got far too much momentum. They hate to be dropping it and picking up something new. So those two ideas are in direct conflict, I think.

Chris: Yeah. It’s funny. You always see the whole like, "Well, will it scale, will it scale," kind of thing all the time. And does it need to? Are you building things for a Facebook sized audience? I’m sure you’ll get there at... Well, you’ll get there, but it would be wonderful if you got there at some point, but like, if you’re not there today, maybe that’s not necessarily how you need to start out. Like those aren’t your needs today. You’re pre-engineering for a problem that you don’t have to the detriment of some problems that you do.

Chris: I think the other thing here is there’s this presumption that because Facebook or Google or Twitter do things, it’s a good idea, or it’s a good idea for everybody. And that’s not necessarily the case. Those companies do a lot of things right. But they also do a lot of things poorly and they do them that way because of engineering trade offs they’ve had to make because of how their teams are structured or very specific internal problems they had at the time that they made this decision or because some executive somewhere felt really strongly about something and made the internal team do it, even though it wasn’t necessarily best at the time. And then these things stick around. And so, yeah, I think one of the biggest things I see happen in our industry to our own detriment is looking at those few really big visible technology companies and thinking, "If they do it this way, I have to too," or "That’s the right call for everybody."

Chris: It’s that old, like no one got fired for hiring IBM kind of thing, but applied to if it’s good enough for Google or if it’s good enough for Twitter or whatever, so yeah. I agree. I think we do a lot of that and maybe that we shouldn’t.

Drew: I asked on Twitter earlier on what frustrated people about modern web development best practices, and from the responses I got, there’s certainly a lot of dissatisfaction with the current state of things. One trend which has been gaining momentum over the last few years is the Jamstack approach to building sites. And it seems on the surface that this is going back to just client-side apps and nothing complex on the server. It sounds like it’s going back to basics, but is it doing that? Or is it just masking the complexity of the stack in a different way?

Chris: It depends. I’m a little biased here because I love the Jamstack personally, but I have also seen... Well, I shouldn’t say I have seen. I think what I’m trying to say here is the Jamstack is a term that can apply to a wide range of approaches, up to and including a really large, two-megabytes-of-JavaScript single page app that has no server side rendering on one end. And then on the other end, flat HTML files that use absolutely no JavaScript at all, load instantly in your browser, and just happen to be shipped from a CDN or something like that. And technically speaking, both of those are Jamstack. So Jamstack is not inherently better than server rendered, but in many cases it can be.

Chris: So for those of you who don’t know, Jamstack used to be an acronym that stood for JavaScript, APIs and markup, and they’ve since changed the spelling and changed the definition a little bit there. And it really encompasses an approach to building the web that doesn’t rely on server side rendering. So anything you’re serving, you’ve already compiled and put together and that’s what ships in the browser. And if there’s any other processing or scripting that happens, that happens in the client. Doesn’t have to, but often does. And so what I think is awesome about Jamstack if done a certain way, is it can dramatically improve the performance of the things that you’re building.

Chris: If you’re not just shipping like a metric ton of JavaScript to the client and having all the stuff that you used to do on the server happen in the browser instead, because the browser will always be less efficient at all that scripting than the server would be, but where this really comes to shine, and so I’ll use like WordPress as an example. I love WordPress. I built my career on WordPress. It’s the reason why I was able to become a developer, but every time someone visits a WordPress site out of the box, it has to make a call to a database, grabs some content, mash it into some templates, generate HTML and ship that back to the browser.

Chris: And there are some plugins you can use to do some of that ahead of time, but it is a very slow process, especially on a shared inexpensive web host. A Jamstack approach would be to have that HTML file already built, and you cut... You don’t cut the server out, but you cut all of that server processing completely out. So the HTML file already exists and gets shipped. And in an ideal world, you would even push that out to a bunch of CDNs so it sits as close to the person accessing it as possible. And what that can do is take a load time from a couple of seconds on an inexpensive host to less than half a second, because of how little computing time it takes to actually just request a file, get the file back and load it, if it’s mostly HTML.

Chris: And so, yeah, that was a really rambling, long-winded response to your question, Drew, but I think the answer is, if you’re using it with something like a static site generator, it can be amazingly more performant than some of the other things we’ve done in the past. And it allows you to get that same WordPress experience where I’m authoring content and I have some templates and I don’t have to hardcode HTML, but the performance is way better on one end.

Chris: And then on the other end, you could theoretically define a React app as Jamstack as well and it can be really slow and buggy and terrible. And so it depends. The other thing I’m seeing happen that’s really, really funny and interesting is we just keep reinventing PHP over and over and over again as an industry in various ways. So-

Drew: We still have PHP as well. It’s not gone.

Chris: Right? And yet PHP still exists and still works great. And so we’ve got... Like I remember when Next.js came out. There was all these kind of, "And here’s all the things you can do with it." And I was like, "Oh, that’s like PHP," but a decade later. And then my friend Zach Leatherman who built Eleventy which is an amazing static site generator has been experimenting with some compiling in real time on the server stuff with Eleventy.

Chris: So it’s like just in time Jamstack and he even jokes that he’s essentially recreated PHP in node and JavaScript, but it’s slightly different because there’s like a serverless build that happens that then instant deploys it to a CDN and it’s like a little weird. So it’s still a house of cards. You’re just shifting around where those cards live and who’s responsible for them, but yeah, yeah. Jamstack is cool. Jamstack is problematic. It’s also not. It’s awesome. It’s potentially overused both as a term and a technology. Yeah. It’s a whole lot of things and I love it in the same way that I love PHP. It’s great and it has problems and every technology and approach is a series of trade-offs.

Drew: Do you think we’re going through some industrial revolution in web development? What used to be skilled, painstaking work from individual artisans is now high-volume, high-production factory output. All the machines have been brought in, along with the frameworks and the build tools, and have we lost that hand-rolled touch?

Chris: Well, I mean, yes, to an extent, but we don’t have to. I mean, that analogy is appropriate in many ways, because a lot of the ways we do things today produce what I like to call front-end pollution, in the over-reliance on JavaScript, but also in the very literal sense: we have so many heavy build processes now that they generate more actual, literal pollution as well. But I think the counterargument here is... I will use farming, right? You could go out and hand-mill your wheat with a scythe, I forget what you call those, the crescent-shaped tool that you use to chop your wheat. Or you could use an oxen-drawn machine that will pull that off, or you can use a big tractor.

Chris: And I think there’s a clear argument that at some point, factory farming is this big industrial complex that has lost a little bit of that close-to-the-Earth touch, but I don’t think I necessarily need my farmers to be hand-chopping their wheat. That is wildly inefficient for very little benefit. And there’s probably a balance there. And I feel the same thing with what we’re doing here. Some of these tools allow us to do more artisan work faster and more efficiently. And sometimes they just turn it into generating a bunch of garbage and turning it out as fast as possible. And there’s not necessarily a clear-cut delineation for where that crossover happens. I think it’s a little fuzzy and gray, a you-know-it-when-you-see-it kind of thing. Sometimes. Not always. But yeah, I think it’s a little bit of both. The commercialization of the web is both a really terrible thing and also a really great thing that has allowed folks like myself to make a living working on the platform that I love, full time.

Chris: That’s awesome. But it’s also produced a lot of problems and I think that’s true for any technology. There’s good and bad that comes with all of it.

Drew: And maybe sometimes we’re just producing really fat pigs.

Chris: Yeah. I’ve become a lot more of an “it depends” person as I’ve gotten older. This stuff used to really, really upset me from a purist standpoint. And I still really hate the fact that we’ve forced our users to endure such a fragile and easily broken web. The web in general has gotten four to five times faster in the last decade. And the average website has only gotten a hundred milliseconds faster in terms of load time, because we just keep throwing more and more stuff at our users. And we could have a really fast, resilient web right now if we wanted one, but we don’t. And part of that is a natural trade-off for pushing the capabilities of the web further and further, and that’s awesome, but I feel like sometimes we do things just because they’re shiny and new and not because they add real benefit to folks. So I’d love to see a little bit more balance there.

Drew: Is part of the problem that we’re expecting the web to do too much? I mean, for many years we didn’t really have any great alternatives, so we enhanced, and maybe over-stretched, the hypertext document system to behave like a software application. And now we’ve all got really powerful phones in our pockets, running a range of network-connected apps. Is that the appropriate outlet for this functionality that we’re trying to build into websites? Should we all just be building apps for that case and leaving the document-based stuff to the web?

Chris: I would argue the other direction. I think the bigger problem is... So maybe there are certain things for which even I personally prefer a native app over something in the web. But I think having the web do more frees you from app ecosystems and allows you as a team to build a thing and be able to reach more people with it, without making them download an app before they can access the thing they want. That’s a really cool thing. And I would argue that potentially the bigger problem is that browsers can’t keep up with the pace of the things that we want the web to do. And that’s not a knock on the people behind the standards processes. I would not want to go back to every browser just doing their own thing, and to hell with it. That was awful to develop for.

Drew: It was, yeah.

Chris: We do have some of those similar problems, though, just based on how the standards process works. So sometimes you’ll see Google, because they have so much in-house development power, get frustrated with other browsers that are part of that process not wanting to go along with something or not moving fast enough. And so they just... Leeroy Jenkins it and run off and go do whatever they want to do. On the flip side, you sometimes see Apple moving very, very slowly because they don’t put as much investment into the web as they do other parts of their business, which is hopefully, maybe, starting to change a little bit with some of the more recent hires they’ve made. But I think one of the things you run into is just that the web tends to move a little slowly sometimes.

Chris: Technology moves fast, but the browsers themselves and the technologies they implement don’t always keep up. And so I don’t believe we demand too much of our browsers. I just think you get this natural ebb and flow where we demand a lot. We build a bunch of libraries to polyfill the things that we want and then when the browser eventually catches up, there’s this really slow, petering off as library usage for that particular stuff drops off.

Chris: Yeah. But I don’t know that I would say we demand too much of the web. Yeah, I don’t know. I actually, I love all the things the web can do. I think it’s really, for me, it’s what’s so exciting about the web. I think my frustration is more just with how slow some of these technologies are to come out, particularly on iOS devices. And I say this as someone who, I love my iPhone, but progressive web apps continue to be a second... They just don’t get as much priority as native apps do on that platform, which is disappointing.

Drew: So looking to the future on that note, what should we, as a development community be working on to fix some of these issues? Where should we be placing our efforts?

Chris: Yeah. So I think there are a few different things. And I think some of the tools we’ve talked about, I don’t think they’ll ever necessarily go away. They might change in form a little bit. But I already see some cool things on the horizon. One of the things people love about single page apps that we’ve never been able to do with, I call them multi-page apps, but they’re really just plain old webpages, is the nice transitions that happen between views, where you might fade in, fade out, or something like that.

Chris: There’s a native API in the works that’s going to make that a lot easier. That’s awesome. There’s also a native API in the works for HTML sanitization. One of the big things that libraries do for you, when you’re rendering HTML from third-party data, is that they have some sanitization baked in that will help reduce your risk of cross-site scripting attacks.

Chris: There’s not really a good, just native way to do that, but there’s one in the works that will make that a lot easier. And even if you continue to use state based libraries, that should allow them to strip a bunch of code out and that would be an awesome thing.

Chris: One thing that the native web can’t do yet that would be really cool is a way to handle DOM diffing, so that if you want to build some HTML based on a JavaScript object and then update it as the object changes, it would be really cool if you didn’t have to rely on a library for that, and there was maybe a performant, out-of-the-box way to do that in the browser. I think that would solve a lot of problems. As would more accessible interactive components. I absolutely love when HTML and CSS can replace something I used to need JavaScript for. It doesn’t need to be as rigorously tested, it’s way more fault-tolerant, less likely to break, more performant all around. It’s a net win. And so I’d love to see more of those come to the platform.

Chris: So from a browser-native perspective, there’s that. And then the other big thing I think we’re going to start to see more of is a shift away from client-side libraries and a shift to more pre-compiled stuff. Whether that’s static site generators, or something like Astro, which still uses JavaScript libraries but pre-renders them instead of making the browser do it. Those are, I think, the big things I’m seeing start to happen, and that I think we’re going to see more and more of.

Drew: So you’re saying maybe it’s not all doom and gloom and perhaps we can fix this? There’s a way out?

Chris: No, yeah, I see us emerging from the dark ages slowly. And what I think is going to happen is we’re going to hit a point where much like where today people are like, "Why does everybody still use React?" I can imagine in 7 to 10 years time, we’re going to be like, "Why does anybody..." I’m sorry. Not React. jQuery. "Why does everybody still use jQuery?" We’re going to see the same thing with React and Vue. Like, "Why does everybody still start a project with those?" And there’s going to be some new libraries that are starting to emerge to solve a whole new set of problems that we haven’t even dreamed of today.

Drew: One comment from Twitter that I really identified with was from Amy Pellegrini, who said, "Every time I update something, everything gets broken." Yep. I just think we’ve all been there, haven’t we?

Chris: Yeah. I unfortunately don’t think that will ever fully go away because even in the non-build tool era of jQuery, we used to just load it with a script element. You would run into situations where different versions would be incompatible with each other. And so you’d drop a plug in into your site and it would be like, "Sorry, you’re running jQuery 1.83, and this requires 1.89 or higher because it added this new..." And so there’s always been some version of that. I think it’s a lot more pronounced now because so much of it happens in the command line and spits out all these terrible errors that don’t make sense. But yeah, that unfortunately I don’t think will ever go away. I feel the pain though. That one, it’s a big part of the reason why I try and use as few dependencies as possible.

Drew: Me too. Me too. So I’ve been learning all about the lean web, or learning more about the lean web, from our conversation. What have you been learning about lately, Chris?

Chris: Yeah. Great question. So I have been going deep on service workers, in part because I love their ability to make the web faster; even if you’re not building a progressive web app, they’re just really, really cool. One of the things I’ve absolutely loved them for, though, is that they allow me to build a single-page-app-like experience in terms of performance, without all the complexity of having to handle JavaScript routing and stuff. So I can have a multi-page app and cache my API calls for a short period of time without having to cache them in memory. And so I’ve been able to do some really cool things with them. And then the other thing I’ve been learning a lot about lately is serverless, which allows me to get the benefits of having some server-side code without having to actually manage a server, which is great.

Chris: And so I went really, really deep on those, put together a couple of courses on both of those topics as well, but they have benefited me immensely in my own work, in particular service workers, which has been amazing. I’m obsessed with them. Recommend them for everybody.

Drew: That’s amazing. And where can people find those courses that you put together?

Chris: So if you go to vanillajsguides.com, you can dig into those and a whole bunch of other courses as well.

Drew: Amazing. If you dear listener would like to hear more from Chris, you can find his book on the web at leanweb.dev and his developer tips newsletter, which I believe now gets over 12,000 subscribers-

Chris: Yeah. Up a little bit from the last time we chatted.

Drew: Yeah. That’s at gomakethings.com. Chris is on Twitter at @chrisferdinandi. And you can check out his podcast at vanillajspodcast.com or wherever you usually get your podcasts. Thanks for joining us today, Chris. Do you have any parting words?

Chris: No, that was a really great summary, Drew. Thank you so much. You hit all the key links. So thanks so much for having me. This was great.

1 Dec 2021 | 3:00 am

It’s That Time Of The Year (December 2021 Desktop Wallpapers Edition)

Slowly but surely, 2021 is coming to an end. And, well, could there be a better way to celebrate the last few weeks of the year than with some cheerful desktop wallpapers? To get you in the right mood for December — and the holiday season, of course — artists and designers from across the globe once again got their ideas bubbling and created festive and inspiring wallpaper designs for you.

All wallpapers in this post are created with love and come in versions with and without a calendar for December 2021 — so no matter if you want to count down the days to a deadline (or to Christmas morning, maybe?) or prefer to enjoy your new wallpaper without any distractions, we’ve got you covered. And if you’re up for some extra holiday cheer, you’ll also find some favorites from our wallpapers archives compiled at the end of this post. Happy December!

  • You can click on every image to see a larger preview,
  • We respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t anyhow influenced by us but rather designed from scratch by the artists themselves.

Submit a wallpaper!

Did you know that you could get featured in our next wallpapers post, too? We are always looking for creative talent! Join in! →

Winter Holidays

“Enjoy the Christmas and New Year holidays with your loved ones!” — Designed by LibraFire from Serbia.

King Of Pop

Designed by Ricardo Gimenes from Sweden.

On To The Next One

“Endings intertwined with new beginnings, challenges we rose to and the ones we weren’t up to, dreams fulfilled and opportunities missed. The year we say goodbye to leaves a bitter-sweet taste, but we’re thankful for the lessons, friendships, and experiences it gave us. We look forward to seeing what the new year has in store, but, whatever comes, we will welcome it with a smile, vigor, and zeal.” — Designed by PopArt Studio from Serbia.

Seville Has A Special Color

“The year is over and we celebrated it in Seville! Happy Christmas and a happy new year!” — Designed by Veronica Valenzuela from Spain.

Anonymoose

Designed by Ricardo Gimenes from Sweden.

Catchiest Christmas Song Ever

Designed by ActiveCollab from the United States.

December In Sunny So Cal

“Oh dear! I mustn’t forget my parasol.” — Designed by Reena Ngauv from Los Angeles.

Oldies But Goodies

The frosty winter weather, the joy of a glass of eggnog by the Christmas tree, or, well, Bathtub Party Day — these are just a few of the things that inspired the community to design a December wallpaper in the more than ten years that we’ve been running this monthly series. Below you’ll find a selection of December favorites from our wallpapers archives. Please note that these designs don’t come with a calendar.

Dear Moon, Merry Christmas

“Please visit Vladstudio website if you like my works!” — Designed by Vlad Gerasimov from Russia.

Winter Landscape

Designed by Morgane Van Achter from Belgium.

Happy Holidays

Designed by Ricardo Gimenes from Sweden.

Getting Hygge

“There’s no more special time for a fire than in the winter. Cozy blankets, warm beverages, and good company can make all the difference when the sun goes down. We’re all looking forward to generating some hygge this winter, so snuggle up and make some memories.” — Designed by The Hannon Group from Washington D.C.

It’s Christmas

“The holiday season is finally here, which means it’s time to deck the halls, bring out the figgy pudding and embrace all things merry and bright. It’s Christmas!” — Designed by Divya (DimpuSuchi) from Malaysia.

Cardinals In Snowfall

“During Christmas season, in the cold, colorless days of winter, Cardinal birds are seen as symbols of faith and warmth! In the part of America I live in, there is snowfall every December. While the snow is falling, I can see gorgeous Cardinals flying in and out of my patio. The intriguing color palette of the bright red of the Cardinals, the white of the flurries and the brown/black of dry twigs and fallen leaves on the snow-laden ground fascinates me a lot, and inspired me to create this quaint and sweet, hand-illustrated surface pattern design as I wait for the snowfall in my town!” — Designed by Gyaneshwari Dave from the United States.

The House On The River Drina

“Since we often yearn for a peaceful and quiet place to work, we have found inspiration in the famous house on the River Drina in Bajina Bašta, Serbia. Wouldn’t it be great being in nature, away from the civilization, swaying in the wind and listening to the waves of the river smashing your house, having no neighbors to bother you? Not sure about the Internet, though…” — Designed by PopArt Studio from Serbia.

Joy To The World

“Joy to the world, all the boys and girls now, joy to the fishes in the deep blue sea, joy to you and me.” — Designed by Morgan Newnham from Boulder, Colorado.

Christmas Time!

Designed by Sofie Keirsmaekers from Belgium.

Ice Flowers

“I took some photos during a very frosty and cold week before Christmas.” — Designed by Anca Varsandan from Romania.

Sweet Snowy Tenderness

“You know that warm feeling when you get to spend cold winter days in a snug, homey, relaxed atmosphere? Oh, yes, we love it, too! It is the sentiment we set our hearts on for the holiday season, and this sweet snowy tenderness is for all of us who adore watching the snowfall from our windows. Isn’t it romantic?” — Designed by PopArt Studio from Serbia.

Christmas All Around The Globe

“Christmas is celebrated all around the globe — in the winter as well as the summer. From north to south, east to west: Merry Christmas everyone!” — Designed by Ricardo Gimenes from Sweden.

Tongue Stuck On Lamppost

Designed by Josh Cleland from the United States.

All That Belongs To The Past

“Sometimes new beginnings make us revisit our favorite places or people from the past. We don’t visit them often because they remind us of the past but enjoy the brief reunion. Cheers to new beginnings in the new year!” — Designed by Dorvan Davoudi from Canada.

Enchanted Blizzard

“A seemingly forgotten world under the shade of winter glaze hides a moment where architecture meets fashion and change encounters steadiness.” — Designed by Ana Masnikosa from Belgrade, Serbia.

’Tis The Season To Be Happy

Designed by Tazi from Australia.

Christmas Woodland

Designed by Mel Armstrong from Australia.

’Tis The Season (To Drink Eggnog)

“There’s nothing better than a tall glass of Golden Eggnog while sitting by the Christmas tree. Let’s celebrate the only time of year this nectar of the gods graces our lips.” — Designed by Jonathan Shears from Connecticut, USA.

Gifts Lover

Designed by Elise Vanoorbeek from Belgium.

Bathtub Party Day

“December 5th is also known as Bathtub Party Day, which is why I wanted to visualize what celebrating this day could look like.” — Designed by Jonas Vanhamme from Belgium.

Christmas Cookies

“Christmas is coming and a great way to share our love is by baking cookies.” — Designed by Maria Keller from Mexico.

Robin Bird

“I have chosen this little bird in honor of my grandfather, who passed away. He was fascinated by nature, especially birds. Because of him, I also have a fascination with birds. When I think of winter, I think of the birds, flying around searching for food. And why a robin? Because it is a cute little bird, who is also very recognizable.” — Designed by Engin Seline from Belgium.

’Tis The Season Of Snow

“The tiny flakes of snow have just begun to shower and we know it’s the start of the merry hour! Someone is all set to cram his sleigh with boxes of love as kids wait for their dear Santa to show up! Rightly said, ’tis the season of snow, surprise and lots and lots of fun! Merry Christmas!” — Designed by Sweans Technologies from London.

Happy Birthday Rudyard!

“December 30th is the birthday of Rudyard Kipling, the writer of the Jungle Book. To celebrate, I decided to create a very festive jungle scene with some of the characters from the story.” — Designed by Safia Begum from the United Kingdom.

Christmas Owl

“Christmas waves a magic wand over this world, and behold, everything is softer and more beautiful.” — Designed by Suman Sil from India.

Trailer Santa

“A mid-century modern Christmas scene outside the norm of snowflakes and winter landscapes.” — Designed by Houndstooth from the United States.

The Matterhorn

“Christmas is always such a magical time of year so we created this wallpaper to blend the majesty of the mountains with a little bit of magic.” — Designed by Dominic Leonard from the United Kingdom.

Christmas Selfie

“In this year of selfies, I’ve imagined Santa Claus doing the same.” — Designed by Emanuela Carta from Italy.

Christmas With The Digies

“Merry Christmas from The Digies at digitalprofile.io.” — Designed by Rachel Sulek from Wales.

Christmas

“A simple wallpaper for the cold month December. Nothing more, nothing less.” — Designed by Frédéric Hermans from Belgium.

Christmas Mood

Designed by MasterBundles from the United States.

30 Nov 2021 | 10:45 pm

3D CSS Flippy Snaps With React And GreenSock

Naming things is hard, right? Well, “Flippy Snaps” was the best thing I could come up with. 😂 I saw an effect like this on TV one evening and made a note to myself to make something similar.

Although this isn’t something I’d look to drop on a website any time soon, it’s a neat little challenge to make. It fits in with my whole stance on “Playfulness in Code” to learn. Anyway, a few days later, I sat down at the keyboard, and a couple of hours later I had this:

3D CSS Flippy Snaps ✨

Tap to flip for another image 👇

⚒️ @reactjs && @greensock
👉 https://t.co/Na14z40tHE via @CodePen pic.twitter.com/nz6pdQGpmd

— Jhey 🐻🛠️✨ (@jh3yy) November 8, 2021

My final demo is a React app, but we don’t need to dig into using React to explain the mechanics of making this work. We will create the React app once we’ve established how to make things work.

Note: Before we get started, it’s worth noting that the performance of this demo is affected by the grid size, and that the demos are best viewed in Chromium-based browsers.

Let’s start by creating a grid. Let’s say we want a 10 by 10 grid. That’s 100 cells (this is why React is handy for something like this). Each cell is going to consist of an element that contains the front and back for a flippable card.

<div class="flippy-snap">
  <!-- 100 of these -->
  <div class="flippy-snap__card flippy-card">
    <div class="flippy-card__front"></div>
    <div class="flippy-card__rear"></div>
  </div>
</div>

The styles for our grid are quite straightforward. We can use display: grid and use a custom property for the grid size. Here we are defaulting to 10.

.flippy-snap {
  display: grid;
  grid-gap: 1px;
  grid-template-columns: repeat(var(--grid-size, 10), 1fr);
  grid-template-rows: repeat(var(--grid-size, 10), 1fr);
}

We won’t use grid-gap in the final demo, but it makes the cells easier to see while developing.

See the Pen 1. Creating a Grid by JHEY

Next, we need to style the sides of our cards and display images. We can do this by leveraging inline CSS custom properties. Let’s start by updating the markup. We need each card to know its x and y position in the grid.

<div class="flippy-snap">
  <div class="flippy-snap__card flippy-card" style="--x: 0; --y: 0;">
    <div class="flippy-card__front"></div>
    <div class="flippy-card__rear"></div>
  </div>
  <div class="flippy-snap__card flippy-card" style="--x: 1; --y: 0;">
    <div class="flippy-card__front"></div>
    <div class="flippy-card__rear"></div>
  </div>
  <!-- Other cards -->
</div>

For the demo, I’m using Pug to generate this for me. You can see the compiled HTML by clicking “View Compiled HTML” in the demo.

- const GRID_SIZE = 10
- const COUNT = Math.pow(GRID_SIZE, 2)
.flippy-snap
  - for(let f = 0; f < COUNT; f++)
    - const x = f % GRID_SIZE  
    - const y = Math.floor(f / GRID_SIZE)
    .flippy-snap__card.flippy-card(style=`--x: ${x}; --y: ${y};`)
      .flippy-card__front
      .flippy-card__rear

Then we need some styles.

.flippy-card {
  --current-image: url("https://random-image.com/768");
  --next-image: url("https://random-image.com/124");
  height: 100%;
  width: 100%;
  position: relative;
}
.flippy-card__front,
.flippy-card__rear {
  position: absolute;
  height: 100%;
  width: 100%;
  backface-visibility: hidden;
  background-image: var(--current-image);
  background-position: calc(var(--x, 0) * -100%) calc(var(--y, 0) * -100%);
  background-size: calc(var(--grid-size, 10) * 100%);
}
.flippy-card__rear {
  background-image: var(--next-image);
  transform: rotateY(180deg) rotate(180deg);
}

The rear of the card gets its position using a combination of rotations via transform. But the interesting part is how we show the correct slice of the image on each card. In this demo, we use a custom property to define the URLs for two images, and then we set those as the background-image for each card face.

The trick is how we define the background-size and background-position. Using the custom properties --x and --y, we multiply each value by -100% to get the background-position, and we set the background-size to --grid-size multiplied by 100%. This displays the correct part of the image for a given card. For example, with a 10 by 10 grid, the card at --x: 2, --y: 3 gets background-position: -200% -300% with background-size: 1000%, which lines up exactly that cell’s slice of the image.

See the Pen 2. Adding an Image by JHEY

You may have noticed that we had --current-image and --next-image. But, currently, there is no way to see the next image. For that, we need a way to flip our cards. We can use another custom property for this.

Let’s introduce a --count property and set a transform for our cards:

.flippy-snap {
  --count: 0;
  perspective: 50vmin;
}
.flippy-card {
  transform: rotateX(calc(var(--count) * -180deg));
  transition: transform 0.25s;
  transform-style: preserve-3d;
}

We can set the --count property on the containing element. Scoping means all the cards can pick up that value and use it to transform their rotation on the x-axis. We also need to set transform-style: preserve-3d so that we see the back of the cards. Setting a perspective gives us that 3D perspective.

This demo lets you update the --count property value so you can see the effect it has.

See the Pen 3. Turning Cards by JHEY

At this point, you could wrap it up there and set a simple click handler that increments --count by one on each click.

const SNAP = document.querySelector('.flippy-snap')
let count = 0
const UPDATE = () => SNAP.style.setProperty('--count', count++)
SNAP.addEventListener('click', UPDATE)

Remove the grid-gap and you’d get this. Click the snap to flip it.

See the Pen 4. Boring Flips by JHEY

Now that we have the basic mechanics worked out, it’s time to turn this into a React app. There’s a bit to break down here.

import { useEffect, useRef, useState } from 'react'

const App = () => {
  const [snaps, setSnaps] = useState([])
  const [disabled, setDisabled] = useState(true)
  const [gridSize, setGridSize] = useState(9)
  const snapRef = useRef(null)

  const grabPic = async () => {
    const pic = await fetch('https://source.unsplash.com/random/1000x1000')
    return pic.url
  }

  useEffect(() => {
    const setup = async () => {
      const url = await grabPic()
      const nextUrl = await grabPic()
      setSnaps([url, nextUrl])
      setDisabled(false)
    }
    setup()
  }, [])

  const setNewImage = async count => {
    const newSnap = await grabPic()
    setSnaps(
      count.current % 2 !== 0 ? [newSnap, snaps[1]] : [snaps[0], newSnap]
    )
    setDisabled(false)
  }

  const onFlip = async count => {
    setDisabled(true)
    setNewImage(count)
  }

  if (snaps.length !== 2) return <h1 className="loader">Loading...</h1>

  return (
    <FlippySnap
      gridSize={gridSize}
      disabled={disabled}
      snaps={snaps}
      onFlip={onFlip}
      snapRef={snapRef}
    />
  )
}

Our App component handles grabbing images and passing them to our FlippySnap component. That’s the bulk of what’s happening here. For this demo, we’re grabbing images from Unsplash.

const grabPic = async () => {
  const pic = await fetch('https://source.unsplash.com/random/1000x1000')
  return pic.url
}

// Initial effect grabs two snaps to be used by FlippySnap
useEffect(() => {
  const setup = async () => {
    const url = await grabPic()
    const nextUrl = await grabPic()
    setSnaps([url, nextUrl])
    setDisabled(false)
  }
  setup()
}, [])

If there aren’t two snaps to show, then we show a “Loading...” message.

if (snaps.length !== 2) return <h1 className="loader">Loading...</h1>

If we are grabbing a new image, we need to disable FlippySnap so we can’t spam-click it.

<FlippySnap
  gridSize={gridSize}
  disabled={disabled} // Toggle a "disabled" prop to stop spam clicks
  snaps={snaps}
  onFlip={onFlip}
  snapRef={snapRef}
/>

We’re letting App dictate the snaps that get displayed by FlippySnap and in which order. On each flip, we grab a new image, and depending on how many times we’ve flipped, we set the correct snaps. The alternative would be to set the snaps and let the component figure out the order.

const setNewImage = async count => {
  const newSnap = await grabPic() // Grab the snap
  setSnaps(
    count.current % 2 !== 0 ? [newSnap, snaps[1]] : [snaps[0], newSnap]
  ) // Set the snaps based on the current "count" which we get from FlippySnap
  setDisabled(false) // Enable clicks again
}

const onFlip = async count => {
  setDisabled(true) // Disable so we can't spam click
  setNewImage(count) // Grab a new snap to display
}

How might FlippySnap look? There isn’t much to it at all!

const FlippySnap = ({ disabled, gridSize, onFlip, snaps }) => {
  const CELL_COUNT = Math.pow(gridSize, 2)
  const count = useRef(0)
  const containerRef = useRef(null) // ref to the container; used by the button below (and by GSAP later)

  const flip = e => {
    if (disabled) return
    count.current = count.current + 1
    if (onFlip) onFlip(count)
  }

  return (
    <button
      className="flippy-snap"
      ref={containerRef}
      style={{
        '--grid-size': gridSize,
        '--count': count.current,
        '--current-image': `url('${snaps[0]}')`,
        '--next-image': `url('${snaps[1]}')`,
      }}
      onClick={flip}>
      {new Array(CELL_COUNT).fill().map((cell, index) => {
        const x = index % gridSize
        const y = Math.floor(index / gridSize)
        return (
          <span
            key={index}
            className="flippy-card"
            style={{
              '--x': x,
              '--y': y,
            }}>
            <span className="flippy-card__front"></span>
            <span className="flippy-card__rear"></span>
          </span>
        )
      })}
    </button>
  )
}

The component handles rendering all the cards and setting the inline custom properties. The onClick handler for the container increments the count. It also triggers the onFlip callback. If the state is currently disabled, it does nothing. That flip of the disabled state and grabbing a new snap triggers the flip when the component re-renders.

See the Pen 5. React Foundation by JHEY

We have a React component that will now flip through images for as long as we want to keep requesting new ones. But, that flip transition is a bit boring. To spice it up, we’re going to make use of GreenSock and its utilities. In particular, the “distribute” utility. This will allow us to distribute the delay of flipping our cards in a grid-like burst from wherever we click. To do this, we’re going to use GreenSock to animate the --count value on each card.

It’s worth noting that we have a choice here. We could opt to apply the styles with GreenSock. Instead of animating the --count property value, we could animate rotateX directly, based on the count ref we have. And this also goes for anything else we choose to animate with GreenSock in this article. It’s down to preference and use case. You may feel that updating the custom property value makes sense; the benefit is that you don’t need to update any JavaScript to get differently styled behavior. We could change the CSS to use rotateY, for example.

Our updated flip function could look like this:

const flip = e => {
  if (disabled) return
  const x = parseInt(e.target.parentNode.getAttribute('data-snap-x'), 10)
  const y = parseInt(e.target.parentNode.getAttribute('data-snap-y'), 10)
  count.current = count.current + 1
  gsap.to(containerRef.current.querySelectorAll('.flippy-card'), {
    '--count': count.current,
    delay: gsap.utils.distribute({
      from: [x / gridSize, y / gridSize],
      amount: gridSize / 20,
      base: 0,
      grid: [gridSize, gridSize],
      ease: 'power1.inOut',
    }),
    duration: 0.2,
    onComplete: () => {
      // At this point update the images
      if (onFlip) onFlip(count)
    },
  })
}

Note how we’re getting an x and y value by reading attributes of the clicked card. For this demo, we’ve opted for adding some data attributes to each card. These attributes communicate a card’s position in the grid. We’re also using a new ref called containerRef. This is so we reference only the cards for a FlippySnap instance when using GreenSock.

{new Array(CELL_COUNT).fill().map((cell, index) => {
  const x = index % gridSize
  const y = Math.floor(index / gridSize)
  return (
    <span
      key={index}
      className="flippy-card"
      data-snap-x={x}
      data-snap-y={y}
      style={{
        '--x': x,
        '--y': y,
      }}>
      <span className="flippy-card__front"></span>
      <span className="flippy-card__rear"></span>
    </span>
  )
})}

Once we get those x and y values, we can make use of them in our animation. Using gsap.to we want to animate the --count custom property for every .flippy-card that’s a child of containerRef.

To distribute the delay from where we click, we set the value of delay to use gsap.utils.distribute. The from value of the distribute function takes an array containing ratios along the x- and y-axes. To get this, we divide x and y by gridSize. The base value is the initial value. For this, we want 0 delay on the card we click. The amount is the largest value. We’ve gone for gridSize / 20, but you could experiment with different values. Something based on the gridSize is a good idea, though. The grid value tells GreenSock the grid size to use when calculating distribution. Last but not least, the ease defines the ease of the delay distribution.

gsap.to(containerRef.current.querySelectorAll('.flippy-card'), {
  '--count': count.current,
  delay: gsap.utils.distribute({
    from: [x / gridSize, y / gridSize],
    amount: gridSize / 20,
    base: 0,
    grid: [gridSize, gridSize],
    ease: 'power1.inOut',
  }),
  duration: 0.2,
  onComplete: () => {
    // At this point update the images
    if (onFlip) onFlip(count)
  },
})

As for the rest of the animation, we are using a flip duration of 0.2 seconds. And we make use of onComplete to invoke our callback. We pass the flip count to the callback so it can use this to determine snap order. Things like the duration of the flip could get configured by passing in different props if we wished.

Putting it all together gives us this:

See the Pen 6. Distributed Flips with GSAP by JHEY

Those that like to push things a bit might have noticed that we can still “spam” click the snap. And that’s because we don’t disable FlippySnap until GreenSock has completed. To fix this, we can use an internal ref that we toggle at the start and end of using GreenSock.

const flipping = useRef(false) // New ref to track the flipping state

const flip = e => {
  if (disabled || flipping.current) return
  const x = parseInt(e.target.parentNode.getAttribute('data-snap-x'), 10)
  const y = parseInt(e.target.parentNode.getAttribute('data-snap-y'), 10)
  count.current = count.current + 1
  gsap.to(containerRef.current.querySelectorAll('.flippy-card'), {
    '--count': count.current,
    delay: gsap.utils.distribute({
      from: [x / gridSize, y / gridSize],
      amount: gridSize / 20,
      base: 0,
      grid: [gridSize, gridSize],
      ease: 'power1.inOut',
    }),
    duration: 0.2,
    onStart: () => {
      flipping.current = true
    },
    onComplete: () => {
      // At this point update the images
      flipping.current = false
      if (onFlip) onFlip(count)
    },
  })
}

And now we can no longer spam click our FlippySnap!

See the Pen 7. No Spam Clicks by JHEY

Now it’s time for some extra touches. At the moment, there’s no visual sign that we can click our FlippySnap. What if when we hover, the cards raise towards us? We could use onPointerOver and use the “distribute” utility again.

const indicate = e => {
  const x = parseInt(e.currentTarget.getAttribute('data-snap-x'), 10)
  const y = parseInt(e.currentTarget.getAttribute('data-snap-y'), 10)
  gsap.to(containerRef.current.querySelectorAll('.flippy-card'), {
    '--hovered': gsap.utils.distribute({
      from: [x / gridSize, y / gridSize],
      base: 0,
      amount: 1,
      grid: [gridSize, gridSize],
      ease: 'power1.inOut'
    }),
    duration: 0.1,
  })
}

Here, we are setting a new custom property on each card named --hovered. This is set to a value from 0 to 1. Then within our CSS, we are going to update our card styles to watch for the value.

.flippy-card {
  transform: translate3d(0, 0, calc((1 - (var(--hovered, 1))) * 5vmin))
             rotateX(calc(var(--count) * -180deg));
}

Here we are saying that a card will move on the z-axis at most 5vmin.

We then apply this to each card using the onPointerOver prop.

{new Array(CELL_COUNT).fill().map((cell, index) => {
  const x = index % gridSize
  const y = Math.floor(index / gridSize)
  return (
    <span
      key={index}
      onPointerOver={indicate}
      className="flippy-card"
      data-snap-x={x}
      data-snap-y={y}
      style={{
        '--x': x,
        '--y': y,
      }}>
      <span className="flippy-card__front"></span>
      <span className="flippy-card__rear"></span>
    </span>
  )
})}

And when our pointer leaves our FlippySnap we want to reset our card positions.


const reset = () => {
  gsap.to(containerRef.current.querySelectorAll('.flippy-card'), {
    '--hovered': 1,
    duration: 0.1,
  })
}

And we can apply this with the onPointerLeave prop.

<button
  className="flippy-snap"
  ref={containerRef}
  onPointerLeave={reset}
  style={{
    '--grid-size': gridSize,
    '--count': count.current,
    '--current-image': `url('${snaps[0]}')`,
    '--next-image': `url('${snaps[1]}')`,
  }}
  onClick={flip}>

Put that all together and we get something like this. Try moving your pointer over it.

See the Pen 8. Visual Indication with Raised Cards by JHEY

What next? How about a loading indicator so we know when our App is grabbing the next image? We can render a loading spinner when our FlippySnap is disabled.

{disabled && <span className='flippy-snap__loader'></span>}

The styles for it could make a rotating circle.

.flippy-snap__loader {
  border-radius: 50%;
  border: 6px solid #fff;
  border-left-color: #000;
  border-right-color: #000;
  position: absolute;
  right: 10%;
  bottom: 10%;
  height: 8%;
  width: 8%;
  transform: translate3d(0, 0, 5vmin) rotate(0deg);
  animation: spin 1s infinite;
}
@keyframes spin {
  to {
    transform: translate3d(0, 0, 5vmin) rotate(360deg);
  }
}

And this gives us a loading indicator when grabbing a new image.

See the Pen 9. Add Loading Indicator by JHEY

That’s it!

That’s how we can create a FlippySnap with React and GreenSock. It’s fun to make things that we may not create on a day-to-day basis. Demos like this can pose different challenges and can level up your problem-solving game.

I took it a little further and added a slight parallax effect along with some audio. You can also configure the grid size! (Big grids affect performance though.)

See the Pen 3D CSS Flippy Snaps v2 (React && GSAP) by JHEY

It’s worth noting that this demo works best in Chromium-based browsers.

So, where would you take it next? I’d like to see if I can recreate it with Three.js next. That would address the performance issues. 😅

Stay Awesome! ʕ•ᴥ•ʔ

30 Nov 2021 | 1:30 am

How To Maintain A Large Next.js Application

Maintaining a large application is always a difficult task. It might have outdated dependencies which can cause maintainability issues. It can also have tests that are flaky and don’t inspire any confidence. There can also be issues with large JavaScript and CSS bundles causing the application to provide a non-optimal user experience for the end-users.

However, there are a few ways in which you can make a large code-base easy to maintain. In this article, we will discuss a few of those techniques as well as some of the things I wish I had known earlier to help manage large Next.js applications.

Note: While this article is specific to Next.js, some of the points will also work for a wide variety of front-end applications.

Use TypeScript

TypeScript is a strongly typed programming language, which means that it enforces a certain strictness when intermixing different types of data. According to the StackOverflow Developer Survey 2021, TypeScript is one of the languages that developers want to work with the most.

Using a strongly typed language like TypeScript will help a lot when working with a large codebase. It will help you understand whether there is a possibility that your application will break when something changes. It is not guaranteed that TypeScript will always complain when there is a chance of breakage. However, most of the time, TypeScript will help you eliminate bugs even before you build your application. In certain cases, the build will fail if there are type mismatches in your code, as Next.js checks type definitions at build time.

From the Next.js docs:

“By default, Next.js will do type checking as part of the next build. We recommend using code editor type checking during development.”

Note that next build is the script that creates an optimized production build of your Next.js application. From my personal experience, it helped me a lot when I was trying to update Next.js to version 11 for one of my applications. As a part of that update, I also decided to update a few other packages. Because of TypeScript and VS Code, I was able to catch those breaking changes even before I had built the application.
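To make that concrete, here’s a minimal sketch of the kind of mismatch TypeScript catches; the Product type and formatPrice function are hypothetical, invented purely for illustration:

// A hypothetical type used across the app
interface Product {
  id: string;
  title: string;
  price: number;
}

function formatPrice(product: Product): string {
  return `$${product.price.toFixed(2)}`;
}

// Passing a string where a number is expected fails the editor check,
// and `next build` will refuse to build, with an error along the lines of:
// "Argument of type '{ ... price: string; }' is not assignable to
//  parameter of type 'Product'."
formatPrice({ id: '1', title: 'Desk', price: '49.99' });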

Use A Mono-Repo Structure Using Lerna Or Nx

Imagine that you are building a component library along with your main Next.js application. You might want to keep the library in a separate repository to add new components, build and release them as a package. This seems clean and works fine when you want to work in the library. But when you want to integrate the library in your Next.js application, the development experience will suffer.

This is because when you integrate the component library with your Next.js application, you might have to go back into the library’s repository, make changes, release the updates, and then install the new version in your Next.js application. Only after that will the new changes from the component library start reflecting in the Next.js application. Imagine your whole team doing this multiple times; the time spent on building and releasing the component library separately quickly adds up to a huge chunk.

This problem can be resolved if you use a mono-repo structure where your component library resides with your Next.js application. In this case, you can simply update your component library and it will immediately reflect in your Next.js application. There is no need for a separate build and release of your component library.

You can use a package like next-transpile-modules so that you don’t even need to build your component library before your Next.js application can consume it. However, if you are planning to release your component library as an npm package, you might need to have a build step.

For managing a mono-repo, you can use tools like Lerna, Nx, Rush, Turborepo, yarn workspaces, or npm workspaces. I liked using Lerna together with yarn workspaces when I needed to configure my build pipeline. If you prefer something which will automate a bunch of things via CLI, you can take a look at Nx. I feel that all of them are good but solve slightly different problems.
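As a rough sketch of the Lerna-plus-yarn-workspaces setup (the package names and folder layout here are made up for illustration), the root package.json declares where the packages live, and lerna.json tells Lerna to reuse that configuration.

package.json (repository root):

{
  "name": "my-monorepo",
  "private": true,
  "workspaces": ["packages/*"],
  "devDependencies": {
    "lerna": "^4.0.0"
  }
}

lerna.json:

{
  "npmClient": "yarn",
  "useWorkspaces": true,
  "version": "independent"
}

With a layout like this, the Next.js app in packages/web can depend on the component library in packages/ui by its package name, and yarn links them locally, so library changes show up in the app without a separate release.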

Use Code Generators Like Hygen To Generate Boilerplate Code

When a lot of developers start contributing to a large code-base, there is a good chance that there will be a lot of duplicate code. This happens mainly because there is a need to build a page, component, or utility function which is similar to an already existing one with slight modifications.

Think of writing unit test cases for your components or utility functions: you might copy the boilerplate code from an existing test and modify parts of it for the new file. However, this fills your code-base with duplicated code and leftover variable names. A proper code-review process can reduce this, but a better way is to automate the generation of the boilerplate code.

Unless you are using Nx, you will need to have a way in which you can automate a lot of code generation. I have used Hygen to generate the boilerplate code for Redux, React components, and utility functions. You can check out the documentation to get started with Hygen. They also have a dedicated section for generating Redux boilerplate. You can also use Redux Toolkit to reduce the boilerplate code necessary for your Redux applications. We will discuss this package next.
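To give an idea of how small a generator can be, here’s a hypothetical Hygen template; the _templates/component/new folder and the component shape are invented for illustration. The frontmatter’s to: field tells Hygen where to write the file, and the EJS tags interpolate the variables you pass in.

_templates/component/new/index.ejs.t:

---
to: components/<%= name %>/index.jsx
---
// Boilerplate generated by `hygen component new`
export function <%= name %>() {
  return <div><%= name %></div>
}

Running hygen component new --name Button would then create components/Button/index.jsx, so every generated component starts from the same convention.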

Use A Well-Established Pattern Like Redux With Lesser Boilerplate Via Redux Toolkit

Many developers will argue that Redux increases the complexity of the code-base or React Context is much easier to maintain. I think that it depends mostly on the type of application that you are building as well as the expertise of the whole development team. You can choose whatever state management solution your team is most comfortable with, but try to choose one that doesn’t need to have a lot of boilerplate.

In this article, I’m mentioning Redux because it is still the most popular state management solution out there according to npm trends. In the case of Redux, you can reduce a lot of boilerplate code by using Redux Toolkit. This is a very opinionated and powerful library that you can use to simplify your state management. Check out their documentation regarding how to get started with Redux Toolkit.

I have used Redux, Zustand, and Redux Toolkit while building Next.js applications. I feel that Zustand is very simple and easy to understand. However, I still use Redux in case I need to build something complex. I haven’t used XState but it is also a popular choice.
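As a sketch of how Redux Toolkit cuts the boilerplate (the counter slice is the standard introductory example, not tied to any particular app), createSlice generates the action creators for you and lets you write “mutating” update logic that is converted into immutable updates under the hood:

import { configureStore, createSlice } from '@reduxjs/toolkit'

const counterSlice = createSlice({
  name: 'counter',
  initialState: { value: 0 },
  reducers: {
    // Looks like mutation, but Redux Toolkit (via Immer) turns this
    // into an immutable update
    increment: (state) => {
      state.value += 1
    },
    incrementBy: (state, action) => {
      state.value += action.payload
    },
  },
})

export const { increment, incrementBy } = counterSlice.actions

export const store = configureStore({
  reducer: { counter: counterSlice.reducer },
})

// Usage: store.dispatch(increment()); store.dispatch(incrementBy(5))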

Use React Query Or SWR For Fetching Async Data

Most front-end applications will fetch data from a back-end server and render it on the page. In a Next.js application, or any JavaScript application, you can fetch data using the Fetch API, Axios, or similar libraries. However, as the application grows, it becomes very difficult to manage this async state of your data. You might create your own abstractions using utility functions or wrappers around Fetch or Axios, but when multiple developers are working on the same application, these utility functions or wrappers will soon become difficult to manage. Your application might also suffer from caching and performance issues.

To resolve these kinds of issues, it is better to use packages like React Query or SWR. These packages provide a default set of configurations out of the box. They handle a lot of things like caching and performance which are difficult to manage on your own. Both of these packages provide some default configuration and options which you can use to customize their behaviors according to the requirements of your application. These packages will fetch and cache async data from your back-end API endpoints and make your application state much more maintainable.

I have used both React Query and SWR in my projects and I like both of them. You can take a look at their comparison and features to decide which one you should use.
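For a feel of what these libraries buy you, here’s a minimal sketch using SWR; the /api/user endpoint and the name field are hypothetical:

import useSWR from 'swr'

// A small fetcher wrapper around the Fetch API
const fetcher = (url) => fetch(url).then((res) => res.json())

function Profile() {
  // SWR caches the response, dedupes identical requests,
  // and revalidates the data in the background
  const { data, error } = useSWR('/api/user', fetcher)

  if (error) return <p>Failed to load.</p>
  if (!data) return <p>Loading…</p>
  return <p>Hello, {data.name}!</p>
}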

Use Commitizen And Semantic Release With Husky

If you deploy and release your application often, then you might have encountered issues with versioning. When you are working on a big application and multiple developers are contributing to it, managing releases becomes even more difficult. Keeping track of the changelog becomes hard: updating it manually is tedious, and slowly the changelog goes out of date.

You can combine packages like Commitizen and Semantic Release to help you with versioning and maintaining a changelog. These tools help you in automating part of your release process by keeping the changelog in sync with what changes were deployed in a particular release. You can use a tool like Husky to ensure that all the contributors are following the established pattern for writing commit messages and helping you in managing your changelog.
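One common way to wire these tools together looks something like the sketch below; the script names and the cz-conventional-changelog adapter are one popular setup among several, and the hook assumes commitlint is installed and configured.

package.json (relevant parts only):

{
  "scripts": {
    "commit": "cz",
    "release": "semantic-release",
    "prepare": "husky install"
  },
  "config": {
    "commitizen": {
      "path": "cz-conventional-changelog"
    }
  }
}

.husky/commit-msg:

npx --no-install commitlint --edit "$1"

Contributors run npm run commit to get Commitizen’s guided prompt, the Husky hook rejects commit messages that don’t follow the convention, and semantic-release derives the next version number and the changelog entry from those messages in CI.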

Use Storybook For Visualizing UI Components

In a large code-base, your application will most likely consist of a lot of components. Some of these components will be outdated, buggy, or not necessary anymore. However, it is very difficult to keep track of this kind of thing in a large application. Developers might create new components whose behavior might be similar to an already existing component because they don’t know that the previous component exists. This happens often because there is no way to keep track of what components the application currently has and how they interact with each other.

Tools like Storybook will help you keep track of all the components that your code-base currently consists of. Setting up Storybook is easy and can integrate with your existing Next.js application. Next.js has an example that shows how to set up Storybook with your application.

I have always liked using Storybook because it helps my team of developers understand how each component behaves and what APIs it exposes. It serves as a source of documentation for every developer. Storybook also helps designers understand the behavior of all the components and interactions. You can also use Chromatic along with Storybook for visual testing and catching regression issues during each release.
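Stories themselves stay small. Here’s a sketch in Component Story Format for a hypothetical Button component; the component, its variant prop, and the title are invented for illustration:

// Button.stories.jsx
import { Button } from './Button'

export default {
  title: 'Components/Button',
  component: Button,
}

// Each named export shows up as a story in the Storybook sidebar
export const Primary = () => <Button variant="primary">Save</Button>
export const Disabled = () => <Button disabled>Save</Button>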

Recommended Reading: “Building React Apps With Storybook” by Abdulazeez Adeshina

Write Maintainable Tests From The Start

Writing tests consumes time. As a result, many companies tend not to invest time in writing any sort of test. Because of this, the application might suffer in the long run. As the application grows, the complexity of the application also increases. In a complex application, refactoring becomes difficult because it is very hard to understand which files might break because of the changes.

One solution to this problem would be to write as many tests as possible from the start. You can follow Test-Driven Development (TDD) or any other similar concept that works for you. There is an excellent article, “The Testing Trophy and Testing Classifications” by Kent C. Dodds, which talks about the different types of tests that you can write.

Writing maintainable tests takes time upfront, but tests are essential for large applications, as they give developers the confidence to refactor files. Generally, I use Jest, React Testing Library, and Cypress for writing tests in my applications.
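As an example of a test that stays maintainable, here’s a short, behavior-focused sketch with Jest and React Testing Library; the Button component is hypothetical:

// Button.test.jsx
import { render, screen, fireEvent } from '@testing-library/react'
import { Button } from './Button'

test('calls onClick when the button is pressed', () => {
  const onClick = jest.fn()
  render(<Button onClick={onClick}>Save</Button>)

  // Query by accessible role and name, the way a user would find it
  fireEvent.click(screen.getByRole('button', { name: 'Save' }))

  expect(onClick).toHaveBeenCalledTimes(1)
})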

Use Dependabot To Update Packages Automatically

When multiple feature teams contribute to the same application, there is a good chance that the packages used in it will become outdated, because updating a package with breaking changes can require a considerable investment of time, which might mean missing deadlines for shipping features. However, deferring updates hurts in the long run: working with outdated packages can cause a lot of issues, like security vulnerabilities, performance problems, and so on.

Fortunately, tools like Dependabot can help your team by automating the update process. Dependabot can be configured to check for outdated packages and send updated pull requests as often as you need. Using tools like Dependabot has helped me a lot in keeping the dependencies of my applications updated.
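Configuring Dependabot is a single file in the repository. A minimal .github/dependabot.yml for an npm project could look like this; the weekly interval and the pull-request limit are just reasonable starting points to adjust:

# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 5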

Things I Wish I Had Known Earlier

There are many things that I wish I had known earlier while building Next.js applications. However, the most important one is the Going to Production section of the Next.js documentation. This section outlines some of the most important things that one should implement before deploying a Next.js application to production. Before I read this section, I used to guess arbitrarily about what to do before deploying any application to production.

Always check what browsers you need to support before deploying your application to production and shipping them to your customers. Next.js supports a wide range of browsers. But it is essential to understand what type of users you are shipping your application to and what type of browsers they use.
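If you do need to pin down a browser baseline, Next.js respects a browserslist field in package.json; the versions below are purely illustrative, so adjust them to your own audience:

{
  "browserslist": [
    "chrome 64",
    "edge 79",
    "firefox 67",
    "opera 51",
    "safari 12"
  ]
}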

Conclusion

These are some of the things that I learned while building and maintaining a large Next.js application. Most of these points will apply to any front-end application. For any front-end application, the main priority should always be shipping a product that has a very good user experience, is fast, and feels smooth to use.

I try to keep all these points in mind whenever I develop any application. I hope that they’ll prove to be useful to you, too!

27 Nov 2021 | 1:30 am

A Showcase Of Lovely Little Websites

A map that blends past and present, a musical time machine bringing back distant memories, or an interactive graphic novel pulling you deeper and deeper into a powerful story — sometimes you come across a lovely little website that, well, instantly conquers your heart. It doesn’t necessarily have to be overly useful or practical. Instead, its true value shines in the experience you get from it. It might leave you with your jaw dropped, with a smile on your face, surprised, excited, or inspired.

In this post, we collected lovely little sites like these, found in the remote corners of the web. They are perfect for a short coffee break or whenever you’re up for a little bit of diversion. We hope you’ll enjoy them. Oh, and if you’ve come across a website that you feel is too good to keep to yourself, please don’t hesitate to share it in the comments below. We’d love to hear about it!

Plant Guides, From A To Z

Every office, and that includes home office as well, is better off with a lovely selection of beautiful plants. But which plants are easier to deal with for some of us who tend to be forgetful? Which ones require more care, and if so, what does it usually involve?

How Many Plants is a wonderful resource that covers all these questions well. It provides a thorough overview of all popular plants, sorted alphabetically and by care difficulty. You can even filter out plants based on their features (size, format, placement), plant type (traits, origins, pet-friendly) and leaf look (shape and surface). A great reference site to keep nearby.

Covid Art Museum

Of course, design isn’t quite like art. While design tries to solve a particular problem, art makes us think and feel — provoking us and questioning the status quo. But art can also bring around new perspectives and change in times when it’s so much needed.

The Covid Art Museum is a growing online exhibition of art born during Covid-19 quarantine, now with 238 contributions by people from all around the world. Often it’s an attempt to cope with the world around us, and perhaps to take a slightly different perspective on how the pandemic has changed our perception of the world and our lives.

Museum Of Annoying Experiences

How often do you feel frustrated these days? How often do you open a browser window just to find yourself stuck identifying fire hydrants and understanding confusing sentences? Or perhaps calling a customer support service just to be put on hold for half an hour (at best)?

The Museum of Annoying Experiences takes us on a journey to the year 3000 when bad customer service is nothing but a distant memory, only observable in the exhibits that show how things used to be in the past (well, today) when most interactions were incredibly annoying. Each exhibit is interactive and playful, taking a fun look at frustrations around us. Who knows: hopefully in the year 3000, all these annoying experiences will indeed be distant.

The Musical Time Machine

It’s still quite difficult to travel back in time, but fortunately, we can do so online. What if you wanted to listen to the pop charts extravaganza from the US back in 1955 or Uzbekistan in 1932? Well, Radiooooo has got your back (well, you might need to sign up for a free basic plan first).

The website is a collection of songs gathered over the decades, now searchable with filters by genre, speed, country, and time period. You can search by “slow” for chilling, “fast” for dancing, and “weird” for bugging out — indeed, there is something for everyone! And if you want to go fancy, there is a shuffle mode, with songs picked by the curators.

UX Misconceptions And Laws

When we design experiences on the web, we usually rely on things that worked well in the past. Of course, we don’t know for sure how well our solutions worked, and we don’t know if they’d perform well next time around. But views emerge out of our experiences, and as they gain ground, they become more established over time. And sometimes, this is exactly how misconceptions appear.

“10 Misconceptions on UX” highlights common views and data around infinite scrolling, making everything accessible from the homepage, original design, mobile-first, and user interviews, among others. Admittedly, the creators of the site are quite opinionated, and you might disagree with some statements, but the website is fun to play with, and there are dozens of random fun facts to explore as well.

Also, if you’d like to deep-dive into common principles and heuristics of UX, Jon Yablonski has collected dozens of Laws of UX on his beautiful website, featuring everything from Hick’s Law and the Law of Common Region to Tesler’s Law and the Zeigarnik Effect. Wonderful resources worth keeping close!

The Timeline Of The Web

The web has gone through quite a few changes over the last three decades. You might remember Perl 5, Firebug, Backbone.js, and the end of Flash, but much of what we’ve experienced on the web appears quite blurry now, as things were changing so quickly.

In The History of the Web, Jay Hoffman, with illustrations by Katerina Limpitsouni, celebrates the most important events in the web’s young history. It’s an evolving timeline that charts these events, with useful resources and links to follow up and review. A lovely little project to keep bookmarked.

Sounds To Help You Focus

Staying focused might easily be one of the biggest challenges when you need to get work done. If you’re working from home and are missing the familiar office sounds, I Miss The Office brings some office atmosphere into your home office — with virtual colleagues who produce typical sounds like typing, squeaking chairs, or the occasional bubbling of the watercooler.

The Boat: A Powerful Piece Of Storytelling

Some stories are so dense, so intense, that they capture you and don’t let you go. “The Boat” is such a story. Based on the short story by Nam Le, “The Boat” combines animation, audio, and ink and charcoal drawings into a powerful, interactive graphic novel.

The story is that of Mai, a girl whose parents send her alone on a boat to Australia after the Vietnam War. And, well, the storytelling experience really is exceptional. Each little element, each thoroughly applied animation contributes to an atmosphere that reflects the fear and despair, but also the hope, that is linked to the escape. Take some time and see for yourself. It’s worth it.

Interactive Timeline... In Dots!

Dots, dots, and even more dots. But these are not just any ordinary dots. Every dot is a historic event, so you can imagine what the whole picture looks like if you step back and take a look at Histography. This impressive interactive timeline spans 14 billion years of history, from the Big Bang to the 2010s. What started out as a simple project at the Bezalel Academy of Arts and Design by Ronel Mor has turned into a leading example of what creative timelines can actually look like.

All of the historical events shown in the interface are drawn from Wikipedia, and new recorded events are added on a daily basis. Not only does it allow you to skip between decades and even millions of years, but you can also choose to watch a variety of events that happened in a particular period or target a specific event in time.

Designing A Galaxy Far Far Away

Take 22 Illustrator files that measure 1024 × 465152 px combined, put in 1000 hours of work, and add the story of Star Wars Episode IV. What you’ll get is a project that will make your jaw drop: SWANH. Brought to life by illustrator and graphic novelist Martin Panchaud, SWANH tells the whole story of “Star Wars: A New Hope” in a huge infographic that requires 403.5ft (123m) of scrolling to get from top to bottom. And, well, it’s worth it.

Made up of 157 images, the sheer dimensions of the piece are impressive, and so is the attention to detail that Martin Panchaud put into recreating the Star Wars universe. But SWANH is more than eye candy for Star Wars lovers. It’s also an experiment that sets out to contrast with what we usually expect on the web: quickly understandable contexts and short stories. Brilliant.

Design Facts That You Didn’t Know About

Humankind has always created; however, the design craft as we know and practice it today is a rather young discipline. But that doesn’t mean it doesn’t have a lot of stories to tell. The project Design Facts by writer and art director Shane Bzok reveals them by serving bite-sized pieces of design history that you probably haven’t heard of yet.

Did you know, for example, that the logo for the Spanish lollipop company Chupa Chups was designed by Salvador Dalí in 1969? That the Adobe founders named their company after a creek that ran behind the house of one of the founders? Or that the logo of the Chanel brand with its interlocking C’s originally adorned the building of a French vineyard, and that Coco Chanel was granted permission by the vineyard owner to use it for her brand in the early 1920s? These are only three of the more than 130 surprising and informative design facts that Shane Bzok has collected. Perfect to squeeze into a short coffee break.

The Beauty Of Vintage Control Panels

An old phone with a dial plate, a tape deck with a grid of buttons, an electricity control room with hundreds of bulbs — vintage electronics have a fascinating charm to them. In praise of all those dials, toggles and buttons that made and shaped the tech design of the past century, Stephen Coles and Norman Hathaway dedicated a Tumblog solely to vintage control panels.

As you’ll see, browsing the collection feels like opening a time capsule. Apart from car dashboards and tech magazine covers of the 80s that still seem (fairly) familiar, you’ll find gems like four-buttoned remote controls from the 40s or retro-futuristic concepts, among them a smartwatch from the 80s that is essentially a shrunken PC worn on the wrist. A fun journey through the history of interface design, one that leaves us wondering what people will think of our state-of-the-art gadgets and UIs 50 years from now.

Little Moments Of Happiness

Did you ever cool off a lion with a fan? It might sound weird, but, well, we did. And what can we say? The lion loved it! The refreshing breeze made his mane dance and brought a big smile to his face. Don’t believe it? Well, go ahead, and try for yourself.

The lion is part of the WebGL project “Moments of Happiness”, brought to life by EPIC Agency. He and five of his animal friends — a sneezing dragon, a playful cat, a paranoid bird, a valorous rabbit and a mighty fish — are bound to put a smile on your face, too, as you interact with them. To breathe life into the odd yet lovable bunch, the experiments use Three.js and the GSAP library. If you want to take a closer look under the hood, the source code is available on CodePen. Watch out, though: it is not fully optimized and might not work in some browsers or on some devices.

Monochromatic Eye Candy

Who doesn’t love some good eye candy? If you need some fresh inspiration, be sure to stop by the Tumblr of The Afrix. Curated by designer Tom Wysocki, the Tumblr resembles a well-balanced exhibition of opposites — black and white, strict geometry and fluent, organic shapes — joining up to build a harmonious whole.

Among the works, you’ll find actual designs for portfolio websites and detailed illustrations, but also rather abstract and seemingly random digital experiments. It’s that mix of the unforeseen that makes the showcase so refreshing despite its monochromatic color palette. Beautiful works of art with a mysterious touch.

An Alphabetical Adventure

“A” is for “Albert”, “B” is for “Bounce”, “C” is for “Cowabunga”. If you have no idea what all of this is about, well, no worries, we’ll tell you: it’s the beginning of a very special piece of eye candy. Brought to life by design agency Studio Lovelock, “A Is For Albert” uses an animated alphabet to explore the moments of happiness — and the little mishaps — that life with kids brings along.

Each letter from A to Z tells the story of how Albert, a blonde little boy, explores the world in his own cute yet chaotic (and, seen from his parents’ perspective, sometimes maybe even a bit annoying) way. He decorates the living-room wallpaper with his brush artworks, for example, and shows his love for the family cat by hugging it a bit too tight. Simple geometric shapes and a soft color palette are everything the project needs to breathe life into Albert’s (and his parents’) everyday adventures and make us smile.

Blending Past And Present

Maps can do more than help us find the way. They are witnesses of their time and, when we look at old maps, it’s like taking a trip back into long forgotten days. Now imagine that you had a magic spyglass that could show you what your neighborhood looked like 100 years ago. You’d only need to get out a recent map, hover your spyglass over it, and see what has changed.

Well, actually, that’s possible. The National Library of Scotland provides a browser-based tool that lets you jump between the same area on a recent and a vintage map just by looking through a (digital) spyglass. The service works for maps of Great Britain, Scotland, England, and Wales. A fantastic way to see the world (and maybe even your neighborhood) from a different perspective.

Do You Have The Design Eye?

So, you think no one is better than you when it comes to assessing whether something is centered or slightly off? Well, then here’s a challenge for you: It’s Centred That. The little game created by the folks at the UX design and web development studio Supremo puts your design eye to the ultimate test: you’re presented with shapes and need to decide if the dot is placed in the center. But beware, what sounds easy is actually harder than you’d think. Will you make it through all 10 levels?

Patterns In Islamic Art

The Islamic world has brought forth an incredibly rich heritage of architectural decoration, a heritage that deserves to be better known and that has a lot to offer not only to art historians, as David Wade points out. To make the beauty accessible to everyone, he started Pattern in Islamic Art, a showcase of more than 4,000 images of patterns and other design features drawn from this artistic tradition. No matter if you are up for some eye candy or want to investigate the underlying construction of the complex geometries, the site is a real treasure chest.

Print Design Inspiration From The Past

Typography, layout, color, patterns — vintage magazines provide an endless source of inspiration. If you’re up for some eye candy, the folks at Present & Correct have collected a selection of print design goodies over time.

Among them are covers from the East German design magazine Form + Zweck, which was published between 1956 and 1990, as well as covers of Switzerland’s oldest typographic journal, Typographische Monatsblätter. The Japanese magazine Industrial Art News with its bold and vibrant cover art is also part of the collection. For some more contemporary inspiration, be sure to also check out the site of the Japanese IDEA magazine, where you can peek inside past issues and even browse them by keyword. Eye candy to get lost in.

A Curated Gallery Of Patterns

When bold colors meet subtle palettes, organic curves appear next to sharp-edged geometric forms, and minimalist designs face playful artworks, inspiration isn’t far. If you’re up for a surprise bag of inspiration, Pattern Collect is for you. The site curates beautifully illustrated patterns created by designers from across the globe.

You can browse the showcase by tag and, if you like an artwork, a link takes you to the original on Dribbble or Behance where you can learn more about the illustrator and their work. Who knows, maybe this will even turn out to be the opportunity to find creative talent to work with on an upcoming project?

A Trip Back To The Early-Days Of Computing

You’re in the mood for some tech nostalgia? Well, then PCjs will be your kind of thing. The open-source project revives the times when computers came with a monochrome display and ran on 4.77 MHz and 64 KB of RAM. And the best part: it’s no mere showcase; you can actually interact with the machines right in your browser. The simulations of the original IBM PC from 1981 and the OSI Challenger 1P from 1978 were written entirely in JavaScript and require no additional plugins — no Java, no Flash.

The pre-configured machines are ready to run BASIC, DOS, Windows 1.01, and assorted non-DOS software, and, if that’s not enough control for you yet, you can even build your own PC. The goal of the project is to help people understand how these early computers worked and to make it easy to experiment with them. It also provides a platform for running and analyzing older software. Now that’s really a trip down memory lane.

A Rainbow Of Cover Artwork

By pairing hex color values with album cover art of 2020, you’ll have the foundation for a very special project: Album Colors Of The Year. It arranges some of last year’s album releases by color to create a rainbow of cover artwork.

Lady Gaga’s album “Chromatica”, for instance, is a case of #ed4c73, Suuns’ “Fiction” shines in #e489b3, and Avalon Emerson’s “DJ-Kicks” screams #f8bb04. In times when album covers often live rather unnoticed in the corner of our smartphone screens, it is nice to see their artwork take center stage for a change. A great place to seek fresh inspiration — or just to discover some new tunes to get you through a lengthy coding session.

Teletext Time Travel

Do you remember the times when you switched to teletext for the weather forecast or the sports results? The loud colors on the black background, pixel-art graphics, and flashing text? (Well, you might not, and that’s perfectly fine!) The Teletext Museum is the perfect place to revive these memories, or to discover them (if you live outside Europe, for example).

If we didn’t have the web, most of us would still be teletext designers and developers, since essentially each teletext page is a box with content in it. Sound familiar? Well, the gallery with images from teletext services from around the world illustrates how the interface design has evolved over time, and a timeline gives you more information on what exactly changed and how.

If you ever wanted to take on the role of a teletext designer, well, you can do that, too. Jason Robertson, who recovers old teletext data from VHS cassettes in a complicated and time-consuming process, provides a plethora of teletext pages from the ’80s and ’90s. Some of them can be edited right in the browser. The process takes some getting used to, but it’s definitely a fun trip back in time.

The Lives Of Famous Painters

When we hear names like Picasso, Dalí, or Miró, we immediately remember some of their paintings. But what do we actually know about the artists behind the masterpieces? About their lives and loves, the events that shaped them and their works? To visualize painters’ lives, information designer Giorgia Lupi and her team at Accurat teamed up with illustrator Michaela Buttignol. The result of the collaboration is a stunning series of minimalist infographics that boil the biographies of ten famous painters down to their cornerstones.

The visualizations depict key moments — births, deaths, love affairs, marriages, children, travels — but also interesting tidbits such as astrological sign, left/right handedness as well as connections and influences. By picking up the characteristic colors and other stylistic preferences of each artist, the designs also reflect the painters’ styles. A fun way to dive deeper into the history of art. If you’d like to learn more about creating engaging infographics like this one, you should also check out Giorgia Lupi’s article on the aesthetics of data narratives.

The Museum Of The World

The Rosetta Stone, the Parthenon sculptures, Egyptian mummies — all of them cornerstones of human culture which can be admired in the British Museum today. Comprising more than 2,000,000 years of human history, its collection is exceptional and one of the largest of its kind. To make that cultural heritage accessible to more people from all over the world, the British Museum has partnered up with Google. The result: the Museum Of The World.

The WebGL-powered desktop experience explores connections between the world’s cultures by showcasing exhibits that shaped human history. As you travel deeper into the history of mankind with each scroll, you can browse the artefacts according to type and area of origin — no matter where in the world they might be located. Stunning.

Bringing Imaginations To Life

Guess what happens if 100 kids draw monsters and 100 illustrators bring those imaginations to life? Probably something hilarious and very refreshing. Katherine Johnson did just that: she invites elementary students to draw monsters, and once their creations have taken shape, she works with illustrators to bring them to life in their unique artistic styles.

The ultimate goal of The Monster Project, as the initiative is called, is to help children recognize the value of their ideas and make them feel excited about the creative potential of their own minds. At the moment, the site features over 100 monsters created by over 100 artists from all over the world. Now, are you feeling inspired already?

25 Nov 2021 | 1:00 am

Adding A Dyslexia-Friendly Mode To A Website

Dyslexia is perhaps the most common learning disorder in the world, affecting somewhere between 10–20% of the world’s population. It can cause difficulties with reading, writing, and spelling, though the degree of impairment varies widely — some people are barely affected while others require a great deal of extra support.

Existing best practices and guidance, such as the Web Content Accessibility Guidelines (WCAG), give us a solid foundation for inclusive design and already incorporate many details that affect dyslexic readers. For example, WCAG guidance around line length and spacing match the recommendations I found doing my research. In fact, some of those resources are linked in the Understanding WCAG 2.1 document which provides extended commentary on the guidelines.

We can build upon those foundations to offer more focused support for different communities, making it easier to engage with our websites on their own terms. In this article, we’ll look at ways to make an existing design dyslexia-friendly.

This article builds on English-language research and can be generalized to cover most European languages that use Latin and Cyrillic scripts. For other languages and scripts, you will find you need to tailor or even ignore these guidelines.

Font Selection
“The font for the body copy should be chosen for its on-screen readability, before any concern for style.”

— “How To Apply Macrotypography For A More Readable Web Page,” Nathan Ford

When I first started researching this topic, I incorrectly believed that I would have to limit my font selection. Luckily, research shows that standard fonts like Helvetica and Times New Roman are just as readable as purpose-built fonts like Dyslexie or Open Dyslexic.

What this means for your font selection is that you merely have to select fonts with legibility in mind.

All right, problem solved, let’s go home!

Well, not really. It turns out there is something special about those purpose-built fonts.

Whitespace
“It seems that at least for some people with dyslexia, they are vulnerable to a phenomenon called ‘visual crowding’ when they read.”

— Dr Jenny Thomson

While study after study shows little benefit from the choice of font, they also consistently show spacing between letters and words as the most important factor in supporting a dyslexic reader. Jon Severs has written a very good overview of these studies with quotes from many of the leading researchers.

The popularity of Comic Sans in the dyslexic community seems to be driven by the wider spacing found in that font, spacing that has been built into additional fonts intended for their community.

As designers, we have the power to extend this spacing to any font, letting us support our readers without a major redesign. While we’re at it, we can further improve things by reducing distractions and design choices that can produce the visual crowding that affects dyslexic readers.

An Existing Design

The following CodePen example shows a fun little design with semantic and accessible markup that received 100% from a Lighthouse audit. It follows best practices, tries to present a strong visual identity, has good contrast levels, and uses Overpass for headings and body, which provides a unified and legible sans serif family of typefaces:

See the Pen Dyslexia-unfriendly design by John C Barstow.

This will be our starting point, which we will extend to build our dyslexia-friendly version.

Initial Changes

We want the entire document to work together to support our dyslexic readers, so we will begin by adding a class to the body element.

<body class="dyslexia-mode">

This will allow us to easily toggle our new changes on and off via JavaScript and makes it easy to locate the relevant CSS rules.

The British Dyslexia Association published a style guide in 2018 which we can use as a starting point:

“Larger inter-letter / character spacing (sometimes called tracking) improves readability, ideally around 35% of the average letter width.”

“Inter-word spacing should be at least 3.5 times the inter-letter spacing.”

The ch unit in CSS is based on the advance width of the “0” glyph, but in practice, for proportional fonts, it can often be used as an approximation of the average character width. If you’re using a font with a particularly narrow or wide zero, you may find you need to adjust the numbers below.

We’re using Overpass in our example, which has a fairly standard zero, so we can express the recommended numbers directly:

.dyslexia-mode {
    letter-spacing: 0.35ch;
    word-spacing: 1.225ch; /* 3.5x letter-spacing */
}

Modern browsers default to enabling a font’s common ligatures, and older browsers will do so if you use the unofficial text-rendering: optimizeLegibility property. For most of us, this improves legibility as it merges close-set characters into a single glyph. For example, ‘f’ and ‘i’ are often merged to create ‘fi’.

Dyslexic readers, on the other hand, may struggle to recognize the ligature as two letters, especially as we have increased the spacing, making ligatures stand out even more than usual. While some browsers may automatically disable ligatures as a result of the increased letter spacing, for consistent behavior we should explicitly disable them ourselves via CSS:

.dyslexia-mode {
    letter-spacing: 0.35ch;
    word-spacing: 1.225ch; /* 3.5x letter-spacing */
    font-variant-ligatures: none; /* explicitly disable ligatures */
}
Line Spacing

The WCAG guidelines suggest a minimum line height of 1.5, with paragraph spacing at least 1.5 times larger than the line spacing.

Following this guidance is already quite helpful for your dyslexic readers, but that minimum value is based on the standard word spacing. Since we’re increasing the word spacing, we should increase the line height proportionally.

I find a line-height of 2.0 works quite well. It’s a little more than the BDA guidance of 1.5x the word spacing, unitless as suggested by MDN documentation, and easy to sync up to a design’s vertical rhythm.

To achieve the recommended amount of paragraph spacing, in this example we apply a top margin on our p elements. In a larger project, you might want to use Heydon Pickering’s famous owl selector, especially if you have nested content.

Following the WCAG suggestion, that top margin should be a minimum of 3em to get the desired paragraph spacing. After feedback from my dyslexic reader, I increased this to 3.5em which was more comfortable for them.

As with any inclusive design, feedback from real users is critical to ensuring the best results.

While we could apply these settings to our entire page, I prefer to target them to the main content area, especially when modifying an existing design. Site headers, footers, and navigation tend not to have paragraph content and can be particularly sensitive to vertical whitespace changes.

.dyslexia-mode main {
   line-height: 2.0;
}

.dyslexia-mode main p {
   margin-top: 3.5em;
}
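
If your main content nests elements beyond plain paragraphs, the owl selector mentioned above could take over the spacing duties. A minimal sketch, assuming you want the same 3.5em rhythm between all adjacent top-level siblings:

.dyslexia-mode main > * + * {
  margin-top: 3.5em; /* space each element from the sibling before it */
}
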
Other Typographic Changes

At this point, we’ve made the large-scale changes that will have the biggest impact on a dyslexic reader. Now we can turn our thoughts to the smaller touches that help refine a design.

The extra whitespace we’ve introduced will make many fonts appear lighter, thinner, or lower contrast, so we can increase the font weight or adjust the color to compensate.

.dyslexia-mode {
  font-weight: 600; /* demi-bold */
}

This in turn may make bold (at a font-weight of 700) harder to distinguish. You could make it a heavier bold by increasing the font-weight or distinguishing it in some other way like changing the size or color. For my design, I chose to leave it at the same weight, but make it darker than the regular text.

.dyslexia-mode strong {
  color: #000;
}

Now is a good time to use your developer tools to check your contrast. For dyslexic readers, you should aim for a contrast ratio of at least 4.5:1, which corresponds to the WCAG 2.1 minimum contrast guidelines.

Why the minimum guidelines? Well, there are two issues to consider. One is that at very high contrast ratios some dyslexic readers will see their text blurring or swirling. This is known as the “blur effect”. This is one of the reasons that the BDA style guide we referenced earlier recommends avoiding pure black text or pure white backgrounds.

The second consideration is that many dyslexic readers find larger font sizes more readable. Research suggests a base size of 18pt, which meets the WCAG definition of large-scale text; at that size, a contrast ratio of 4.5:1 will still meet the enhanced (AAA) contrast guidelines.

Which reminds us, we should bump up that base font size!

.dyslexia-mode {
  font-size: 150%; /* assuming 16px base size, convert to 18pt */
}

Responsive designs tend to scale well with browser zoom settings, so a different strategy here could be to leave your font size untouched and suggest that your readers increase the page zoom in their browser.

Following the WCAG guidelines means that our design does not use justified text, so we don’t have to make an adjustment. Because justification can alter the spacing between letters and words, if you have used it, you should ensure you disable it in a dyslexia-friendly mode.
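
If your base styles do justify text somewhere, a minimal sketch of the override could look like this:

.dyslexia-mode main {
  text-align: left; /* justification alters letter and word spacing, so switch it off */
}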

Reduce Clutter

The extra whitespace we’ve been adding makes it easier to focus on letters and words. That implies that we can be even more helpful by reducing the amount of confusing, cluttered, or potentially distracting things in our design.

Best practices in web design tend to emphasize progressive enhancement and mobile-first design, which helps keep page weights down and makes web pages resilient. These practices naturally lead to a minimal default state with fewer decorations and distractions (because these would overwhelm a small screen). We can preserve this minimal state in our dyslexia-friendly mode.

For the background, this means defaulting to a solid color and using the :not pseudo-class in our enhancements to avoid applying them to our new mode.

We can use similar constructs to avoid the creation of decorative borders and shadows, leaving only those that are functionally necessary.

@media (min-width: 700px) { /* only apply on wider screens... */
  body:not(.dyslexia-mode) main { /* ...if not in our friendly mode! */
    background-image: url(https://res.cloudinary.com/jbowtie/image/upload/v1631662164/exclusive_paper_dyitgt.webp);
  }
}

In the existing design, we deliberately make the heading look like an imperfectly applied printed label by rotating it slightly. This is meant to evoke a playful or humanistic touch, and we often see designs adopt little touches like these for similar reasons.

However, this label-like appearance is a prime example of a decorative element that produces visual crowding. So even though it works well in a mobile context, we are going to need to remove this touch to provide a better experience for our dyslexic readers.

.dyslexia-mode h2 {
  border: none;
  border-bottom: thin grey solid; /* keep just the bottom border, to retain some separation */
  max-width: 100%; /* standard width */
  transform: none; /* do not rotate */
  background-color: inherit; /* we no longer look like a label, so we don't require our own background */
  margin-bottom: 1em;
  padding-left: 0; /* some spacing adjustments */
}

Zebra striping has long been used when displaying tabular data, but research by Jessica Enders shows that the benefits are not necessarily as clear as I thought, and I didn’t find any dyslexia-specific research on the subject.

What I did find was a request from my dyslexic reader to implement zebra striping for tables and lists! Once again, real user feedback is invaluable.

I chose to restrict this to the main content, to avoid having to revisit the design of the site navigation. We don’t actually have any tables in our example, but the CSS changes would be quite similar.

.dyslexia-mode main li:nth-of-type(odd) {
    background-color: palegoldenrod;
}
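
And as a sketch of what the equivalent striping for table rows might look like (our example has no tables to test it against):

.dyslexia-mode main tr:nth-of-type(odd) {
    background-color: palegoldenrod;
}
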
Toggling Our New Mode

Now that we have a dyslexia-friendly design, we need to decide whether to make it the default, or something that is chosen by the user.

When retrofitting an existing site, as in this example, we’ll probably opt for a mode, to reduce the impact of changes on existing users.

In building a new site or refreshing a design, we should consider which changes we can make the default, for the benefit of all users. As with any other design work, you’re balancing the needs of multiple audiences, branding constraints, and tensions with other design goals such as evoking specific moods or keeping certain information “above the fold”.

Switching between modes is accomplished by toggling the class on the body element. Here we do it with a toggle button and some JavaScript, using localStorage to persist the change across visits and pages. This could be set and stored as part of a user profile.

    // restore the saved preference on page load
    const toggle = document.getElementById('dyslexia-toggle');
    const isPressed = window.localStorage.getItem('dyslexic') === 'true';
    if (isPressed) {
        document.body.classList.add('dyslexia-mode');
        toggle.setAttribute('aria-pressed', 'true');
    }
    // toggle dyslexia support on click, persisting the choice
    toggle.addEventListener('click', (e) => {
        const pressed = e.target.getAttribute('aria-pressed') === 'true';
        e.target.setAttribute('aria-pressed', String(!pressed));
        document.body.classList.toggle('dyslexia-mode');
        window.localStorage.setItem('dyslexic', String(!pressed));
    });

See the Pen Dyslexia-friendly mode added by John C Barstow.

Conclusion

The separation of content and presentation that CSS gives us always comes in handy when we need to adapt designs to better serve different communities.

Building on the solid foundations of a design that embraces accessibility guidelines, we’ve learned to extend our design to improve the experience for dyslexic readers. There are other audiences that could benefit from this kind of focused design work, and I hope this inspires you to seek them out and share your experience.

This design was tested with a small and possibly unrepresentative sample size. If you or someone you know has dyslexia, your feedback in the comments below about what does or doesn’t work would be very welcome and helpful!

23 Nov 2021 | 11:00 pm

Smashing Workshops: Winter 2021

For many of us, a personal workspace can feel quite comfortable and convenient, but nobody really wants to sign up for another full day of focused screen time. That’s why we break our online Smashing workshops down into 2.5h sessions — with one session per day. This way, you always have enough time to take it all in, try things out, rewatch a session, or raise questions between sessions.

We’re super thrilled to announce the full program of workshops for the months to come:

  • Creating and Maintaining Successful Design Systems with Brad Frost: 5 sessions, Nov 30 – Dec 14 (workflow)
  • Dynamic CSS Masterclass with Lea Verou: 4 sessions, Nov 29 – Dec 14 (css)
  • Design Management Masterclass with Yury Vetrov: 5 sessions, Dec 1–15 (ux)
  • Designing The Perfect Navigation with Vitaly Friedman: 2 sessions, Dec 2–3 (ux), early-bird tickets available
  • Accessible Front-End Patterns Masterclass with Carie Fisher: 5 sessions, Jan 20 – Feb 3 (dev), early-bird tickets available
  • New Adventures In Front-End, 2022 Edition with Vitaly Friedman: 5 sessions, Feb 3–17 (dev), early-bird tickets available
  • Front-End Testing Masterclass with Gleb Bahmutov: 4 sessions, Feb 8–16 (dev), early-bird tickets available
  • Ethical Design Masterclass with Trine Falbe: 5 sessions, March 1–15 (ux), early-bird tickets available
  • 10× Tickets Bundle: 10 tickets, no expiry, save $1,250 off the price. Smashing!

Our online workshops take place live and span multiple days across weeks. In every session, there is always enough time to bring up your questions or just get a cup of tea. We don’t rush through the content, but instead, try to create a welcoming, friendly and inclusive environment for everyone to have time to think, discuss and get feedback.

There are plenty of things to expect from a Smashing workshop, but the most important one is the focus on practical examples and techniques. The workshops aren’t talks; they are interactive, with live conversations with attendees, sometimes with challenges, homework, and teamwork. Of course, you get all workshop materials and video recordings as well, so if you miss a session you can rewatch it the same day.

Jump to all workshops →
SmashingConf San Francisco 2022

Yes, it’s official! Next year, we’ll be organizing a SmashingConf in each of these cities: San Francisco, Freiburg, New York and Austin! Alongside in-house workshops, the first speakers have already been announced with talks by experts on accessibility, front-end, design systems, performance and interface design.

We’d love to meet you in person on March 28–31, 2022, at the waterfront next to the iconic Golden Gate Bridge. There will be two days of talks, a single track, two workshop days, and loads of side events (all included in your ticket). For both the talks and the workshops, we have a good range of topics, varying from Figma to Web Performance and from SVG Animation to CSS Custom Properties. Jump to all speakers and topics →

A friendly, inclusive conference for designers and developers. Let’s jazz together!
To the tickets →
The Next Smashing Conference In A City Nearby

Great conferences are all about learning new skills and making new connections. That’s why we’ve set up a couple of new adventures for 2022 — practical sessions, new formats, new lightning talks, evening sessions and genuine, interesting conversations — with a dash of friendly networking!

Austin, USA

We are so excited to be bringing SmashingConf to Austin again on June 27–30, 2022. We’ll be exploring how new web technologies and emerging front-end/UX techniques can make us all better designers and developers. More details will be announced very soon — make sure to subscribe to the SmashingConf newsletter to be one of the first ones to know! 🌮

Freiburg, Germany

We will be returning to our hometown for SmashingConf Freiburg on September 5–7, 2022. We pour our hearts into creating friendly, inclusive events that are focused on real-world problems and solutions. Our focus is on front-end and UX, but we cover all things web — be it UI design or machine learning. The Freiburg edition is, of course, no exception! 🥨

New York, USA

Each and every one of our Smashing conferences is a friendly, inclusive event for people who care about their work. No fluff, no fillers, no multi-track experience — just actionable insights applicable to your work right away. Join us for SmashingConf NYC in October 2022 — an event that is always quite a popular one! Follow @smashingconf on Twitter to get notified once we spill the beans on the Who, When and Where! ✨

Thank You!

A sincere thank you for your kind, ongoing support and generosity — for being smashing, now and ever. We’d be honored to welcome you.

23 Nov 2021 | 5:00 am

Improving The Performance Of Wix Websites (Case Study)

A website’s performance can make or break its success, yet in August 2020, despite many improvements we had previously made, such as implementing Server-Side Rendering (SSR), the ratio of Wix websites with good Google Core Web Vitals (CWV) scores was only 4%. It was at this point that we realized we needed to make a significant change in our approach towards performance, and that we must embrace performance as part of our culture.

Implementing this change enabled us to take major steps such as updating our infrastructure along with completely rewriting our core functionality from the ground up. We deployed these enhancements gradually over time to ensure that our users didn’t experience any disruptions, but instead only a consistent improvement of their site speed.

Since implementing these changes, we have seen a dramatic improvement in the performance of websites built and hosted on our platform. In particular, the worldwide ratio of Wix websites that receive a good (green) CWV score has increased from 4% to over 33%, which means an increase of 737%. We also expect this upwards trend to continue as we roll out additional improvements to our platform.

You can see the impact of these efforts in the Core Web Vitals Technology Report from Google Chrome User Experience Report (CrUX) / HTTP Archive:

These performance improvements provide a lot of value to our users because sites that have good Google CWV scores are eligible for the maximum performance ranking boost in the Google search results (SERP). They also likely have increased conversion rates and lower bounce rates due to the improved visitor experience.

Now, let’s take a deeper look into the actions and processes we put in place in order to achieve these significant results.

The Wix Challenge

Let’s begin by describing who we are, what our use cases are, and what challenges we face.

Wix is a SaaS platform providing products and services for any type of user to create an online presence. This includes building websites, hosting websites, managing campaigns, SEO, analytics, CRM, and much more. It was founded in 2006 and has since grown to have over 210 million users in 190 countries, and hosts over five million domains. In addition to content websites, Wix also supports e-commerce, blogs, forums, bookings and events, and membership and authentication. And Wix has its own app store with apps and themes for restaurants, fitness, hotels, and much more. To support all this, we have over 5,000 employees spread around the globe.

This high rate of growth, coupled with the current scale and diversity of offerings, presents a huge challenge when setting out to improve performance. It’s one thing to identify bottlenecks and implement optimizations for a specific website or a few similar websites, and quite another when dealing with many millions of websites that offer such a wide variety of functionality and an almost total freedom of design.

As a result, we cannot optimize for a specific layout or set of features that are known in advance. Instead, we have to accommodate all of this variability, mostly on-demand. On the positive side, since there are so many users and websites on Wix, improvements that we make benefit millions of websites, and can have a positive impact on the Web as a whole.

There are more challenges for us in addition to scale and diversity:

  • Retaining existing design and behavior
    A key requirement we set for ourselves was to improve the performance of all existing websites built on Wix without altering any aspect of their look and feel. So essentially, they need to continue to look and work exactly the same, only operate faster.
  • Maintaining development velocity
    Improving performance requires a significant amount of resources and effort. And the last thing we want is to negatively impact our developers' momentum, or our ability to release new features at a high rate. So once a certain level of performance is achieved, we want to be able to preserve it without being constantly required to invest additional effort, or slow down the development process. In other words, we needed to find a way to automate the process of preventing performance degradations.
  • Education
    In order to create change across our entire organization, we needed to get all the relevant employees, partners, and even customers up to speed about performance quickly and efficiently. This required a lot of planning and forethought, and quite a bit of trial and error.
Creating A Performance Culture

Initially, at Wix, performance was a task assigned to a relatively small dedicated group within the company. This team was tasked with identifying and addressing specific performance bottlenecks, while others throughout the organization were only brought in on a case-by-case basis. While some noticeable progress was made, it was challenging to implement significant changes just for the sake of speed.

This was because the amount of effort required often exceeded the capacity of the performance team, and also because ongoing work on various features and capabilities often got in the way. Another limiting factor was the lack of data and insight into exactly what the bottlenecks were so that we could know exactly where to focus our efforts for maximum effect.

About two years ago, we came to the conclusion that we could not continue with this approach: in order to provide the level of performance that our users require and expect, we needed to operate at the organizational level, and failing to provide this level of performance would be detrimental to our business and future success. There were several catalysts for this understanding, some due to changes in the Web ecosystem in general, and others due to our own market segment in particular:

  • Changes in device landscape
    Six years ago, over 70% of sessions for Wix websites originated from desktops, with under 30% coming from mobile devices. Since then the situation has flipped, and now over 70% of sessions originate on mobile. While mobile devices have come a long way in terms of network and CPU speed, many of them are still significantly underpowered when compared to desktops, especially in countries where mobile connectivity is still poor. As a result, unless performance improves, many visitors experience a decline in the quality of experience they receive over time.
  • Customer expectations
    Over the past few years, we’ve seen a significant shift in customer expectations regarding performance. Thanks to activities by Google and others, website owners now understand that having good loading speed is a major factor in the success of their sites. As a result, customers prefer platforms that provide good performance — and avoid or leave those that don’t.
  • Google search ranking
    Back in 2018 Google announced that sites with especially slow pages on mobile would be penalized. But starting in 2021, Google shifted its approach to instead boost the ranking of mobile sites that have good performance. This has increased the motivation of site owners and SEOs to use platforms that can create fast sites.
  • Heavier websites
    As the demand for faster websites increases, so does the expectation that websites provide a richer and more engaging experience. This includes features like videos and animations, sophisticated interactions, and greater customization. As websites become heavier and more complex, the task of maintaining performance becomes ever more challenging.
  • Better tooling and metrics standardization
    Measuring website performance used to be challenging and required specific expertise. But in recent years the ability to gauge the speed and responsiveness of websites has improved significantly and has become much simpler, thanks to tools like Google Lighthouse and PageSpeed Insights. Moreover, the industry has primarily standardized on Google’s Core Web Vitals (CWV) performance metrics, and monitoring them is now integrated into services such as the Google Search Console.

These changes dramatically shifted our perception of website performance from being just one part of our offerings to being an imperative company focus and a strategic priority. It also became clear that implementing a culture of performance throughout the organization was a must in order to achieve this strategy. To accomplish this, we took a two-pronged approach. First, at an “all hands” company update, our CEO announced that going forward, ensuring good performance for websites built on our platform would be a strategic priority for the company as a whole, and that the various units within the company would be measured on their ability to deliver on this goal.

At the same time, the performance team underwent a huge transformation in order to support the company-wide prioritization of performance. It went from working on specific speed enhancements to interfacing with all levels of the organization in order to support their performance efforts. The first task was providing education on what website performance actually means and how it can be measured. Once the teams started working off of that knowledge, the next step was organizing performance-focused design and code reviews, training and education, plus providing tools and assets to support these ongoing efforts.

To this end, the team built on the expertise that it had already gained while working on specific performance projects. And it also engaged with the performance community as a whole, for example by attending conferences, bringing in domain experts, and studying up on modern architectures such as the Jamstack.

Measuring And Monitoring

Peter Drucker, one of the best-known management consultants, famously stated:

“If you can’t measure it, you can’t improve it.”

This statement is true for management, and it’s undoubtedly true for website performance.

But which metrics should be measured in order to determine website performance? Over the years many metrics have been proposed and used, which made it difficult to compare results taken from different tools. In other words, the field lacked standardization. This changed approximately two years ago when Google introduced three primary metrics for measuring website performance, known collectively as Google Core Web Vitals (CWV).

The three metrics are:

  1. LCP: Largest Contentful Paint (measures how quickly the main content becomes visible)
  2. FID: First Input Delay (measures response time to user input)
  3. CLS: Cumulative Layout Shift (measures visual stability)

CWV have enabled the industry to focus on a small number of metrics that cover the main aspects of the website loading experience. And the fact that Google is now using CWV as a search ranking signal provides additional motivation for people to improve them.

Recommended Reading: “An In-Depth Guide To Measuring Core Web Vitals” by Barry Pollard

At Wix, we focus on CWV when analyzing field data, but also use lab measurements during the development process. In particular, lab tests are critical for implementing performance budgets in order to prevent performance degradations. The best implementations of performance budgets integrate their enforcement into the CI/CD process, so they are applied automatically, and prevent deployment to production when a regression is detected. When such a regression does occur it breaks the build, forcing the team to fix it before deployment can proceed.
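
As a simple public illustration of the concept (our own tooling is described below), Lighthouse accepts a budgets file and can fail a run when thresholds are exceeded. The numbers here are purely illustrative:

[
  {
    "path": "/*",
    "timings": [
      { "metric": "interactive", "budget": 5000 },
      { "metric": "first-contentful-paint", "budget": 2000 }
    ],
    "resourceSizes": [
      { "resourceType": "script", "budget": 300 }
    ]
  }
]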

There are various performance budgeting products and open-source tools available, but we decided to create our own custom budgeting service called Perfer. This is because we operate at a much larger scale than most web development operations, and at any given moment hundreds of different components are being developed at Wix and are used in thousands of different combinations in millions of different websites.

This requires the ability to test a very large number of configurations. Moreover, in order to avoid breaking builds with random fluctuations, tests that measure performance metrics or scores are run multiple times and an aggregate of the results is used for the budget. In order to accommodate such a high number of test runs without negatively impacting build time, Perfer executes the performance measurements in parallel on a cluster of dedicated servers called WatchTower. Currently, WatchTower is able to execute up to 1,000 Lighthouse tests per minute.

After deployment, performance data is collected anonymously from all Wix sessions in the field. This is especially important in our case because the huge variety of Wix websites makes it effectively impossible to test all relevant configurations and scenarios “in the lab.” By collecting and analyzing RUM data, we ensure that we have the best possible insight into the experiences of actual visitors to the websites. If we identify that a certain deployment degrades performance and harms that experience, even if the degradation was not caught by our lab tests, we can quickly roll it back.

Another advantage of field measurements is that they match the approach taken by Google in order to collect performance data into the CrUX database. Since it is the CrUX data that is used as an input for Google’s performance ranking signal, utilizing the same approach for performance analysis is very important.

All Wix sessions contain custom instrumentation code that gathers performance metrics and transmits this information anonymously back to our telemetry servers. In addition to the three CWV, this code also reports Time To First Byte (TTFB), First Contentful Paint (FCP), Total Blocking Time (TBT), and Time To Interactive (TTI), as well as low-level metrics such as DNS lookup time and SSL handshake time. Collecting all this information makes it possible for us not only to quickly identify performance issues in production, but also to analyze the root causes of such issues. For example, we can determine whether an issue was caused by changes in our own software, by changes in our infrastructure configuration, or even by issues affecting third-party services that we utilize (such as CDNs).
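
The details of our instrumentation are internal, but Google’s open-source web-vitals library illustrates the same field-collection pattern. A minimal sketch (the /telemetry endpoint is hypothetical):

import { getCLS, getFID, getLCP, getTTFB } from 'web-vitals';

// send each metric to a (hypothetical) telemetry endpoint;
// sendBeacon survives page unloads, with fetch as a fallback
function sendToAnalytics(metric) {
  const body = JSON.stringify({ name: metric.name, value: metric.value });
  if (!(navigator.sendBeacon && navigator.sendBeacon('/telemetry', body))) {
    fetch('/telemetry', { body, method: 'POST', keepalive: true });
  }
}

getCLS(sendToAnalytics);
getFID(sendToAnalytics);
getLCP(sendToAnalytics);
getTTFB(sendToAnalytics);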

Upgrading Our Services And Infrastructure

Back when I joined Wix seven years ago, we only had a single data center (along with a fallback data center) in the USA which was used to serve users from all around the world. Since then we’ve expanded the number of data centers significantly, and have multiple such centers spread around the globe. This ensures that wherever our users connect from, they’ll be serviced both quickly and reliably. In addition, we use CDNs from multiple providers to ensure rapid content delivery regardless of location. This is especially important given that we now have users in 190 countries.

In order to make the best possible use of this enhanced infrastructure, we completely redesigned and rewrote significant portions of our front-end code. The goal was to shift as much of the computation as possible off of the browsers and onto fast servers. This is especially beneficial in the case of mobile devices, which are often less powerful and slower. In addition, this significantly reduced the amount of JavaScript code that needs to be downloaded by the browser.

Reducing JavaScript size almost always benefits performance because it decreases the overhead of the actual download as well as parsing and execution. Our measurements showed a direct correlation between the JavaScript size reduction and performance improvements:

Another benefit of moving computations from browsers to servers is that the results of these computations can often be cached and reused between sessions even for unrelated visitors, thus reducing per-session execution time dramatically. In particular, when a visitor navigates to a Wix site for the first time, the HTML of the landing page is generated on the server by Server-Side Rendering (SSR) and the resulting HTML can then be propagated to a CDN.

Navigations to the same site — even by unrelated visitors — can then be served directly from the CDN, without even accessing our servers. If this workflow sounds familiar that’s because it’s essentially the same as the on-demand mechanism provided by some advanced Jamstack services.

Note: “On-demand” means that instead of Static Site Generation performed at build time, the HTML is generated in response to the first visitor request, and propagated to a CDN at runtime.

Similarly to Jamstack, client-side code can enhance the user interface, making it more dynamic by invoking backend services using APIs. The results of some of these APIs are also cached in a CDN as appropriate. For example, in the case of a shopping cart checkout icon, the HTML for the button is generated on the server, but the actual number of items in the cart is determined on the client-side and then rendered into that icon. This way, the page HTML can be cached even though each visitor is able to see a different item count value. If the HTML of the page does need to change, for example, if the site owner publishes a new version, then the copy in the CDN is immediately purged.

In order to reduce the impact of computations on end-point devices, we moved business logic that does need to run in the browser into Web Workers: business logic invoked in response to user interactions, for example. The code that runs in the browser’s main thread is mostly dedicated to the actual rendering operations. Because Web Workers execute their JavaScript code off the main thread, they don’t block event handling, enabling the browser to quickly respond to user interactions and other events.

Examples of code that runs in Web Workers include the business logic of various vertical solutions such as e-commerce and bookings. Sending requests to backend services is mostly done from Web Workers, and the responses are parsed, stored, and managed in the Web Workers as well. As a result, using Web Workers can reduce blocking and improve the FID metric significantly, providing better responsiveness in general; in lab measurements, this improved TBT.
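
To make the pattern concrete, here is a minimal sketch tying it to the shopping cart example above (illustrative only, not our actual code; the endpoint and selector are hypothetical):

// worker.js: fetch and parse cart data off the main thread
self.addEventListener('message', async (event) => {
  if (event.data === 'get-cart-count') {
    const response = await fetch('/api/cart'); // hypothetical backend endpoint
    const cart = await response.json();
    self.postMessage(cart.items.length);
  }
});

// main thread: stays free for rendering and input handling
const worker = new Worker('worker.js');
worker.addEventListener('message', (event) => {
  document.querySelector('.cart-count').textContent = event.data; // hypothetical element
});
worker.postMessage('get-cart-count');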

Enhanced Media Delivery

Modern websites often provide a richer user experience by downloading and presenting many more media resources, such as images and videos, than ever before. Over the past decade, the median number of image bytes downloaded by websites has, according to the Google CrUX database, increased more than eightfold!

This is more than the median improvement in network speeds during the same period, which results in slower loading times. Additionally, our RUM data (field measurements) shows that for almost ¾ of Wix sessions the LCP element is an image. All of this highlights the need to deliver images to the browsers as efficiently as possible and to quickly display the images that are in a webpage’s initially visible viewport area.

At the same time, it is crucial to deliver images at the highest possible quality in order to provide an engaging and delightful user experience. This means that improving performance by noticeably degrading the visual experience is almost always out of the question. The performance enhancements we implement need to preserve the original quality of the images used, unless explicitly specified otherwise by the user.

One technique for improving media-related performance is optimizing the delivery process, which means downloading the required media resources as quickly as possible. To achieve this for Wix websites, we use a CDN to deliver the media content, as we do with other resources such as the HTML itself. And by specifying a long caching duration in the HTTP response headers, we allow images to be cached by browsers as well. This can significantly improve loading speed for repeat visits to the same page, since the images don’t need to be downloaded over the network again.
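
For example, an immutable image rendition might be served with a header along these lines; the exact values are illustrative, not Wix’s actual configuration:

```
Cache-Control: public, max-age=31536000, immutable
```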

Another technique for improving performance is to deliver the required image information more efficiently by reducing the number of bytes that need to be downloaded while preserving the desired image quality. One method to achieve this is to use a modern image format such as WebP. Images encoded as WebP are generally 25% to 35% smaller than equivalent images encoded as PNG or JPG. Images uploaded to Wix are automatically converted to WebP before being delivered to browsers that support this format.
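
On Wix this conversion happens automatically on the server (typically negotiated via the browser’s Accept request header), but the same idea can also be expressed directly in markup. The file names here are illustrative:

```html
<picture>
  <!-- Browsers that support WebP pick the smaller encoding... -->
  <source srcset="photo.webp" type="image/webp">
  <!-- ...everyone else falls back to the JPG. -->
  <img src="photo.jpg" alt="Product photo">
</picture>
```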

Very often, images need to be resized, cropped, or otherwise manipulated when displayed within a webpage. This manipulation can be performed inside the browser using CSS, but that usually means more data is downloaded than is actually used; for example, all the pixels of an image that have been cropped out aren’t actually needed but are still delivered. For Wix sites, we perform these manipulations on the server side before the images are downloaded, taking into account the viewport size and resolution as well as the display pixel depth, so that only the pixels that are actually required are transmitted over the network. On the servers, we employ AI and ML models to generate resized images at the best possible quality.
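
A common way to let a server produce exactly the rendition a given viewport needs is to encode the resize parameters in the image URL and describe the candidates with srcset and sizes. The URL scheme below is hypothetical, not Wix’s actual media service:

```html
<!-- Hypothetical image-service URLs: ?w= asks the server for a pre-resized
     rendition, so only the pixels that will actually be displayed are sent.
     The browser picks a candidate based on viewport width and pixel density. -->
<img
  src="https://images.example.com/photo.jpg?w=800"
  srcset="https://images.example.com/photo.jpg?w=400 400w,
          https://images.example.com/photo.jpg?w=800 800w,
          https://images.example.com/photo.jpg?w=1600 1600w"
  sizes="(max-width: 600px) 100vw, 50vw"
  alt="Hero image">
```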

Yet another technique for reducing the amount of image data that needs to be downloaded upfront is lazy loading images: not loading images that are wholly outside the visible viewport until they are about to scroll into view. Deferring image downloads in this way, and even avoiding them completely (if a visitor never scrolls to that part of the page), reduces network contention with resources that are required as soon as the page loads, such as an LCP image. Wix websites automatically utilize lazy loading for images, and for various other types of resources as well.
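
The native loading attribute is the simplest way to express this; an IntersectionObserver-based fallback can cover browsers that don’t support it. The file names are illustrative:

```html
<!-- In the initial viewport (e.g. the LCP image): loaded eagerly, the default. -->
<img src="hero.jpg" alt="Hero" width="1600" height="900">

<!-- Below the fold: deferred until the image approaches the viewport. -->
<img src="gallery-item.jpg" loading="lazy" alt="Gallery item" width="800" height="600">
```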

Looking Forward

Over the past two years, we have deployed numerous enhancements to our platform intended to improve performance. The result of all these enhancements is a dramatic increase in the percentage of Wix websites that get a good score for all three CWVs compared to a year ago. But performance is a journey, not a destination, and we still have many more action items and future plans for improving website speed. To that end, we are investigating new browser capabilities as well as additional changes to our own infrastructure. The performance budgets and monitoring that we have implemented act as safeguards, ensuring that these changes deliver actual benefits.

New media formats are being introduced that have the potential to reduce download sizes even further while retaining image quality. We are currently investigating AVIF, which looks especially promising for photographic images that can use lossy compression. In such scenarios, AVIF can provide significantly smaller downloads than even WebP at equivalent quality. AVIF also supports progressive rendering, which may improve perceived performance and user experience, especially on slower connections, but currently won’t provide any benefits for CWV.

Another promising browser innovation that we are researching is the content-visibility CSS property. This property enables the browser to skip the effort of rendering an HTML element until it’s actually needed. In particular, when content-visibility: auto is applied to an element that is off-screen, its descendants are not rendered. This enables the browser to skip most of the rendering work, such as styling and layout of the element’s subtree.
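
A minimal example of the property in use; the class name and the size estimate are illustrative. contain-intrinsic-size reserves an estimated height for the skipped content so the scrollbar doesn’t jump as sections render in:

```css
/* Sections far below the fold: skip styling and layout until needed. */
.below-fold-section {
  content-visibility: auto;
  /* Illustrative estimate of the section's height while it's unrendered. */
  contain-intrinsic-size: 600px;
}
```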

This is especially desirable for many Wix pages, which tend to be lengthy and content-rich. In particular, Wix’s new EditorX responsive site editor supports sophisticated grid and flexbox layouts that can be expensive for the browser to render, making it all the more valuable to avoid unnecessary rendering operations. Unfortunately, this property is currently only supported in Chromium-based browsers. It’s also challenging to implement this functionality in such a way that no Wix website is ever adversely affected in terms of its visual appearance or behavior.

Priority Hints is an upcoming browser feature that we are also investigating; it promises to improve performance by providing greater control over when and how browsers download resources. This feature lets developers inform browsers about which resources are more urgent and should be downloaded ahead of others. For example, a foreground image could be assigned a higher priority than a background image, since it’s more likely to contain significant content. On the other hand, if applied incorrectly, priority hints can actually degrade download speed, and hence also CWV scores. Priority Hints are currently undergoing an Origin Trial in Chrome.
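
At the time of writing, the origin trial exposes this as a fetchpriority attribute; since the feature is still experimental, the exact syntax may change. A sketch of the idea:

```html
<!-- Hero/LCP image: hint that it should be fetched ahead of other images. -->
<img src="hero.jpg" fetchpriority="high" alt="Hero">

<!-- Decorative image further down the page: can wait. -->
<img src="decoration.png" fetchpriority="low" alt="">
```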

In addition to enhancing Wix’s own infrastructure, we’re also working on providing better tooling for our users so that they can design and implement faster websites. Since Wix is highly customizable, users have the freedom and flexibility to create both fast and slow websites on our platform, depending on the decisions they make while building their sites. Our goal is to inform users about the performance implications of their decisions so that they can make appropriate choices. This is similar to the SEO Wiz tool that we already provide.

Summary

Implementing a performance culture at Wix enabled us to apply performance enhancements to almost every part of our technological stack — from infrastructure to software architecture and media formats. While some of these enhancements have had a greater impact than others, it’s the cumulative effect that provides the overall benefit. And these benefits aren’t just measurable at large scale; they’re also apparent to our users, thanks to tools like WebPageTest and Google PageSpeed Insights, and to the actual feedback they receive from their own visitors.

The feedback we receive from our users and from the industry at large, together with the tangible benefits we see, drives us to keep improving our platform’s speed. The performance culture that we’ve implemented is here to stay.
