This blog post is a part of The Future of Design Blogger Competition organized by CGTrader. It’s a longer and more technology-based post than I usually write, having been written in collaboration with my geeky husband.

Two words: Artificial Intelligence. I feel that the future of design, image editing, and graphics will be augmented by AI in ways we can’t fully imagine now. I shall attempt to give my best guess at the direction the creative process will take, and keep the timespan to advancements that are likely to happen within my lifetime. Few people seem to realise just how rapidly technology is advancing. Nobody can predict future timelines with any certainty, and those who try are usually doomed to fail, at least in their estimates of precisely when things will happen. We can often only be sure that they will. AI is one of those things.

The Future of Smart Image Editing

As has always been the case, far more will be possible in the future than is possible today. Advancements in the software that I use (Adobe Photoshop) have meant that the clone tool has given way to the healing brush (where an unwanted part of an image is replaced by a more useful part, copying tonality but blending the edges of the copied area into the surrounding image). Then came the spot healing brush and the content-aware fill tool, each progressively smarter, but all using image detail from within the image being worked on.

So at the moment, image data is simply copied from one part of an image, manipulated, and moved to another part. This is where things get interesting. As AI improves, and computing power increases, image editing software will be able to trawl the internet for image data it can add, and make its best guess based on hundreds of thousands – even millions – of images. Image data can even be generated by combining information from many images. Got an unwanted person spoiling your lovely view of Big Ben, the Eiffel Tower, or the Taj Mahal? Simply ask your computer to remove them and fill in the missing image pixels using images from the internet. Famous landmarks will be especially suited to this, as there are already thousands of photos of them online, and very likely a few shot from near enough the exact same spot as the photo you took.
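Photoshop's own algorithms are proprietary, but the current 'borrow pixels from elsewhere in the same image' idea can be sketched with OpenCV's inpainting functions (my stand-in, not Adobe's method; the file names and mask below are hypothetical):

```python
# A rough sketch of today's "fill from within the same image" approach, using
# OpenCV's inpainting (not Photoshop's own algorithm).
import cv2

photo = cv2.imread("big_ben.jpg")
# White pixels in the mask mark the unwanted object to remove.
mask = cv2.imread("unwanted_person_mask.png", cv2.IMREAD_GRAYSCALE)

# Fill the masked region using surrounding pixels from the same photo.
result = cv2.inpaint(photo, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
cv2.imwrite("big_ben_clean.jpg", result)
```

The future version of this would do the same job, but draw its replacement pixels from the millions of similar photos available online rather than from the single photo you took.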

There is already imaging software that sorts thousands of images of frequently photographed famous monuments and builds a map of the angles each photo was taken from.

Cooperative Machines

Computers are becoming better at understanding and executing commands given to them in a natural way, such as being told to do something using everyday speech. Soon, artists will be able to give instructions like 'make the picture a little more sunny' or 'get rid of that ugly fire extinguisher in the corner', and the image editing program will do as it is asked.
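As an entirely hypothetical toy sketch of the idea, here is how a phrase could be mapped to an edit operation using the Pillow imaging library (a real assistant would use proper natural-language understanding rather than keyword matching, and the file names are made up):

```python
# Toy sketch: map everyday phrases to simple Pillow adjustments.
from PIL import Image, ImageEnhance

def apply_command(image, command):
    command = command.lower()
    if "sunny" in command or "brighter" in command:
        # Nudge brightness and colour warmth up slightly.
        image = ImageEnhance.Brightness(image).enhance(1.2)
        image = ImageEnhance.Color(image).enhance(1.1)
    elif "moody" in command or "darker" in command:
        image = ImageEnhance.Brightness(image).enhance(0.8)
    return image

photo = Image.open("holiday.jpg")
edited = apply_command(photo, "make the picture a little more sunny")
edited.save("holiday_sunny.jpg")
```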

Right now, computers are able to learn the style of an artist and transfer it onto another drawing or photograph. For example, the phone app Prisma can make a picture you took on your phone look like a painting by Van Gogh or Picasso.
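Prisma's exact implementation isn't public, but the same kind of effect can be reproduced with openly available style-transfer models. Here is a minimal sketch using the arbitrary image stylization model published on TensorFlow Hub (my choice of tool, not Prisma's; the image file names are placeholders):

```python
# Minimal style-transfer sketch using a public TensorFlow Hub model.
import tensorflow as tf
import tensorflow_hub as hub

def load_image(path):
    # Read an image, scale pixels to [0, 1] and add a batch dimension.
    img = tf.io.decode_image(tf.io.read_file(path), channels=3, dtype=tf.float32)
    return img[tf.newaxis, ...]

content = load_image("my_photo.jpg")      # the photo you took
style = load_image("starry_night.jpg")    # the painting whose style you want

model = hub.load("https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")
stylized = model(tf.constant(content), tf.constant(style))[0]

tf.keras.utils.save_img("stylized.jpg", stylized[0])
```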

Pretty soon, computers will be learning stylistic themes – the best practices in design, such as the use of white space on a page, readability, typography, and professionalism. Think of the Microsoft Office paperclip assistant from back in the day that would try to help with your creation of documents. If you started typing 'Dear Sir or Madam', it would pop up and say 'It looks like you're writing a letter – can I help you with that?' We are many years on from that now, and shortly your computer will be able to read through the poster/leaflet/website that you are designing, work out what the key content is, and infer who the target audience is. It will then suggest ways to improve it based on the thousands of similar posters/leaflets/websites that it has seen online.

Ranking content is also a simple task for machines (based on data such as the number of clicks, likes, and shares). The Grid and Firedrop are early attempts at AI-based web design programs. This current generation does not take the meaning of the content into account, though, and therefore makes errors that a human would not. Getting AI to truly understand meaning, rather than just copy styles, is something Google is working on with its research into thought vectors.
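To show just how simple engagement-based ranking is, here is a hypothetical illustration (the designs, numbers, and weights are all made up):

```python
# Hypothetical example: score each design by its engagement data, then sort.
designs = [
    {"name": "Poster A", "clicks": 1200, "likes": 300, "shares": 45},
    {"name": "Poster B", "clicks": 900,  "likes": 520, "shares": 130},
    {"name": "Poster C", "clicks": 400,  "likes": 80,  "shares": 10},
]

def engagement_score(design):
    # Arbitrary weights -- shares are rarer, so they count more heavily.
    return design["clicks"] + 2 * design["likes"] + 5 * design["shares"]

for design in sorted(designs, key=engagement_score, reverse=True):
    print(design["name"], engagement_score(design))
```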

Neural Networks

Neural networks are a bit of a black box when it comes to understanding exactly how they work – even explaining them in simple terms is beyond the scope of this blog. Suffice it to say, they perform exceptionally well when given a specific, well-defined task. In fact, a few years ago they overtook the best humans' visual abilities (for example, in handwriting recognition and image classification), and they are currently better than humans at a whole host of specific tasks. Being a bit of a geek, Richard has written about them on his blog: Deep Learning: Creating graphics from computer vision.

Here are a few neural networks that have relevance to imaging/design:

DeepWarp – a neural network that assesses whether a person is looking at the camera or not and, if not, corrects their gaze so they are looking straight into the lens. Very useful for group shots, as someone is always blinking or looking the wrong way.

FaceApp – a phone app that can take a picture of a face and make it smile, smooth out lines in the skin, and so on.

iPhone 7 Portrait mode – not technically a neural network, as it uses the difference in viewpoint between the phone's two cameras to create a depth map of the image and blur the background to isolate the person. I'm sure it won't be long before a neural network is used to create similar results from a single image.

Selfie focal length manipulator – one of the (many) reasons that selfies do not look as good as professionally taken photographs is the distance they are taken from: arm's length. More flattering photographs are usually taken from further away, using a longer zoom lens. This one is best explained by the YouTube video Algorithmic Beautification of Selfies by Károly Zsolnai-Fehér of Two Minute Papers.

Future improvements

A semi-intelligent approach being taken to improve the quality of AI output is Generative Adversarial Networks (GANs) – I know they sound complicated, but the idea is really quite simple. One network generates an image, and a second network compares that image to real photos or human-created designs, spots the differences, and tells the first network where it could improve. This way, the first network will – over time – generate better, more realistic images that will eventually be indistinguishable from photographs.
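To make the generator-versus-critic idea a little more concrete, here is a minimal sketch of a single GAN training step, written in PyTorch (my choice of framework; the layer sizes and image shape are arbitrary):

```python
# Minimal GAN sketch: a generator learns to fool a discriminator.
import torch
import torch.nn as nn

# Generator: turns random noise into a 28x28 "image" (flattened to 784 values).
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
# Discriminator: guesses whether an image is real (1) or generated (0).
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_images):  # real_images: (batch, 784) tensor of real examples
    batch = real_images.size(0)
    fake_images = G(torch.randn(batch, 64))

    # 1. Teach the discriminator to tell real images from generated ones.
    opt_D.zero_grad()
    d_loss = (loss_fn(D(real_images), torch.ones(batch, 1)) +
              loss_fn(D(fake_images.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_D.step()

    # 2. Teach the generator to fool the discriminator.
    opt_G.zero_grad()
    g_loss = loss_fn(D(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    opt_G.step()
```

In practice the two networks are trained on many thousands of real images, alternating these two steps until the generator's output becomes convincing.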

Another constant that ensures technological progress is Moore's Law (which roughly states that computing power doubles every two years), together with the determination of the tech industry to keep up with it. This growth means that your phone currently has more processing power than the room-sized computers NASA used to put a man on the moon. And even before sufficient personal computing power (PC, tablet, mobile) becomes readily available, a lot of software can be cloud-based, taking advantage of supercomputers.

Marketplaces

Marketplaces are everywhere in the form of online stores. Instead of being designed from scratch by the end user, websites are now created from a content management system such as WordPress, with themes available to purchase on an online marketplace. Games are the same – the Unreal Engine marketplace has all the assets (characters, textures, level designs, etc.) needed to create a game. Stock photo libraries have been around for a very long time. Soon, neural networks and AI assistants will be purchasable, too.

Once you have your bot/AI, you could buy an add-on for it that includes all the knowledge gained from hundreds of hours of training to imitate an artist's or designer's personal style. To take it one step further, that artist/designer could receive royalties whenever the bot is used for commercial applications. This would be similar to how photographers get paid for the use of their photos by image libraries such as Getty or Shutterstock. I think this would be a fantastic way to ensure that humans are still appreciated, and financially compensated, for the unique skills they possess.

What you would gain vs what you would lose

It’s possible that by making amazing art and design so quick and effortless to create, we may lose some of the sense of satisfaction that comes from the time invested in producing work we are happy with. For example, I love vector art because it is so clean. Crisp lines go well with bold colours. I love the instant impact of tracing an outline and block-filling it. I love the ability to refine an image and add detail. It’s possible to stop whenever the work feels ‘finished’, so no amount of detail is ‘required’. Some pictures look best with tons of tiny finishing touches; others work well as simple, minimalistic pieces. I don’t think the satisfaction I would get from creating something amazing would be diminished by using AI to automate part of the process and speed it up.

What does the future hold?

In summary, three things will enable the future of design.

Collaboration.
The synergy between human and machine input in our future may be deeper than we think (especially if Elon Musk’s Neuralink achieves its goal of a human/AI brain interface) – but either way, the use of AI will be ubiquitous in years to come.

Ability.
The capability of AI assistance will be beyond anything we can fathom now. This is due to the vast amount of information on the internet that an AI can be ‘trained’ on. Software can sift through hundreds of thousands more images than a human could ever hope to, and recognise individual features and quirks that a human never could.

Intelligence.
All the skill, aptitude, and ability in the world have little use if they are not applied intelligently. The day will come when AI will be able to take the ideas of a human, suggest improvements, and enable the creation of masterpieces.

Just as cameras becoming affordable in the 20th century allowed everyone to take their own pictures, and then smartphones allowed everyone to share those pictures, the future will democratise the creation of works of art and great design with the assistance of intelligent machines. This is the world we are headed for, and it’s coming sooner than you think.

Thank you for taking the time to read this post. Stay tuned for more updates!