Reshaping 
Creativity

•  Trend Article

How creative leaders use high compute trends to power breakthroughs in their industries

Contents

• 3D

Demand For The Third Dimension


• AI

Expanding The Limits Of Creativity


• (n)K Resolution

The Imminent Need For Infinite Pixels


Technology Trends Revolutionizing Creativity

The digital landscape in which most creative professionals work is evolving so quickly that it’s as if the technological tectonic plates under their feet are in constant motion. In areas as disparate as 3D, artificial intelligence, and infinite resolution, advancements are continuously redefining what’s possible – from fully interactive ray-traced 3D models rendered in real time to resolution-independent texture models that accommodate any display requirements to virtual retail experiences that exist in the real world via augmented reality. At the same time, artists and designers are increasingly able to leverage new technological tools to power their breakthroughs: To work more efficiently, to streamline workflows, and to future-proof their creations.

 

Perhaps no technology has evolved as quickly, or become as important to modern workflows, as 3D rendering. After decades of computing evolution, processing power has reached a point where it’s now possible for creators to work in 3D in real time, shedding the need for previsualizations and lengthy render times. Digital craftsman and Z by HP Ambassador Alex Trochut expresses it this way: “I’ve gone from being on crutches to flying on a rocket.” Not having to wait for a render changes every aspect of a workflow.

 

And as creators embrace this tech, 3D is finding its way into every industry. David McGavran, CEO of Maxon, describes the most common applications for the company’s 3D suite. “Cinema 4D is used in motion graphics, game development, general design, live design for concerts and stage events, and visualization for architecture, engineering, scientific and medical applications.” Even that list isn’t exhaustive; applications that were once relegated to the realm of science fiction, like virtual reality and augmented reality, now depend on creators working with 3D tools on a daily basis.

 

To keep up with the pace of this change, today’s resolutions aren’t good enough. Whether it’s a photo, video, or an object texture within a VR experience, creators increasingly need to future-proof their projects for whatever comes tomorrow. “Within Avid, we’ve been following a philosophy of resolution independence,” says Shailendra Mathur, Vice President of Architecture at Avid Technology. “That’s why we call it (n)K resolution. Any aspect ratio, any resolution. We’ve gone from SD to HD, to UHD, and now we’re at 8K. That trend is going to continue.” 

 

Likewise, artificial intelligence (AI) has been a hot-button topic for years – pundits have long predicted that the rise of AI poses an existential threat to many traditional jobs, which can, in principle, now be done by machines. But the real future of AI is shaping up differently. AI is being baked into tools, becoming an invisible assistant to creators.

 

“Increasingly, customers expect technology to understand their intent and for software to be much more intuitive as a result,” says Mark Linton, Vice President for Portfolio, Solutions & Marketing at Microsoft. 

 

AI is reshaping software, from content-aware fills in Photoshop to photogrammetry tools that can recreate photorealistic environments in 3D renders.

 

Through all of this, how creators interact with their tools remains an exciting area of innovation. Technology has a habit of becoming less apparent, and already we can see trends in which computers and their mouse-and-keyboard interfaces fade into the background, from artists using their phones instead of large monitors for color accuracy in a project to the slow evolution from keyboard to touch to augmented reality. “I can see a future where every screen is touch,” says visual artist, director and Z by HP Ambassador Shane Griffin.

 

In Reshaping Creativity we will investigate these topics and more.

3D: Demand for the Third Dimension

Few modern technologies are as pervasive as 3D. From 3D gaming and visual effects-heavy Hollywood blockbusters to manufacturing and fashion, 3D graphics can be found playing a pivotal role in a wide range of products, technologies, workflows and industries. Since 3D is about modeling and simulating objects in a digital environment, it’s increasingly used for visualization everywhere.

3D’s ubiquity is, in part, what’s fueling its growth. The global 3D animation market was valued at nearly $14 billion in 2018 and is expected to grow at a compound annual growth rate of 11 percent at least through 2025, according to Grand View Research.1

 

Some industries are in the throes of massive transformation. While those in manufacturing, for example, have used 3D for decades, other industries, like fashion, are just beginning to see the benefits of 3D across the entire workflow. Artists who have traditionally needed to wait hours or days for complex scenes to render can now create photorealistic visualizations in minutes. There are few businesses that can’t benefit from these efficiencies.

$14B

Market value of 3D animation

Real-Time Rendering Transforms Workflows

Traditionally, the costliest resource for creating high-quality visualizations was time – rendering time. “The availability of computational power has changed everything,” says Sébastien Deguy, Vice President of 3D and Immersive at Adobe. David McGavran, CEO of Maxon, elaborates: “The success of GPU renderers like Redshift have changed the time calculation exponentially for rendering amazingly detailed scenes in minutes which used to take hours or days.” Photorealistic renderings can now be created at 24 or 30 frames per second with no discernible delay. These astounding advances in hardware performance are being paired with real-time graphics engines, which got their start in the computer gaming world. Unreal Engine and Unity are well known as the engines driving the graphics behind hundreds of popular game titles. But nothing about rendering engines is gaming-specific. “This technology came from gaming, but it’s being applied everywhere,” says Ross McKegney, Director of Engineering for 3D and AR at Adobe. 

Hollywood is embracing these real-time rendering tools, and for good reason. In the 2000s, directors shot actors in greenscreen environments and surrounded them with rich 3D environments. But the tools of that era allowed only a rudimentary ability to previsualize the scenes that were being shot digitally. 

With modern real-time rendering, directors see photorealistic visualizations of the scene in the camera, giving them the ability to see what the finished film will look like even as it’s being shot. VFX studio The Mill2 has been combining real-time rendering with motion capture in innovative ways to allow clients to create fully realized content in real time. In the past, if you wanted a computer-generated character to appear in a video spot, that might take months to script, shoot with a motion-captured human actor, and then render. Need to reshoot? Repeat the process. The Mill, however, motion-captures an actor and renders the performance photorealistically as a fully realized CG character as it’s recorded, reducing the workflow from months to hours. The CG character responds instantly to motion-captured action in front of the camera, so the director and clients can see exactly what they’re getting without weeks of post-production. 

“This technology came from gaming, but it’s being applied everywhere”

Ross McKegney

Director of Engineering, 3D and AR, Adobe

And then there’s the same studio’s Cyclops toolkit, which, when combined with Unreal Engine, allows a producer to insert a photorealistically rendered automobile into fast-paced road-race footage. A placeholder vehicle covered in markers is shot on location, then “skinned” by compositing any other vehicle over it in a convincingly undetectable way. 

Photogrammetry Combines the Real and the Virtual

Once you build 3D models, though, how do you create the world in which they live? Certainly, 3D environments can be built from scratch, but it’s incredibly expensive and time-consuming to create that content. Photogrammetry – the ability to reconstruct 3D models from photographs of real objects and scenes – is emerging as a powerful tool for automating the creation of 3D worlds. 

 

“What we want to do is capture enough of the real world so that we can represent all of the real world virtually,” says Marc Petit, General Manager of Unreal Engine at Epic Games. “We’re not going to scan all the rocks on the planet, right? That would be highly impractical. But we can scan enough rocks that using machine learning techniques, we can generate all the rocks that we want, which are going to be all different.” Last year, Epic Games acquired Quixel, a leader in photogrammetry, and has made tens of thousands of photogrammetry-based digital assets available to artists for free. 

 

This technique is used extensively in film, and in other applications as well, such as architectural renderings.

3D Goes Mainstream

The final element responsible for moving 3D into the mainstream is the emergence of Physically Based Rendering (PBR), which defines textures and surfaces in 3D models based on physical properties. “Before PBR, to render an image, you’d have a programmer actually write code to define how a scene would be rendered,” says McKegney. “It was a very technical job. With PBR you’re putting that in the hands of a designer who can create models by specifying physical properties. It’s much more efficient than writing custom code.” 
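To make “specifying physical properties” concrete, here is a minimal, hedged sketch of a metallic-roughness material expressed as a Python dictionary in the style of the glTF 2.0 PBR model; the material name and the values are illustrative assumptions, not anything cited in this report.

```python
# A designer-facing PBR material: physical properties instead of custom shader code.
# Structured after the glTF 2.0 "pbrMetallicRoughness" model (values are illustrative).
brushed_copper = {
    "name": "brushed_copper",
    "pbrMetallicRoughness": {
        "baseColorFactor": [0.95, 0.64, 0.54, 1.0],  # linear RGBA albedo
        "metallicFactor": 1.0,                        # fully metallic surface
        "roughnessFactor": 0.35,                      # moderately blurred reflections
    },
}

# Any PBR-aware renderer can consume the same description and produce consistent
# results, whether it renders offline or in real time.
print(brushed_copper["pbrMetallicRoughness"])
```

Because the description is data rather than code, a designer can adjust a slider for roughness or metalness and see the result immediately, without ever touching a shader.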

 

This one development has enabled countless businesses to build 3D imaging into their product development, design, and marketing workflows; photorealistic textures can be applied by designers as a part of the creative workflow.

Fashion brand Deckers designs its footwear in 3D modeling software, allowing for precision and efficiency during the manufacturing process. 

Chris Hillyer, Innovation Director at the fashion brand Deckers, is spearheading this kind of 3D transformation for the creation of footwear. “I’m just adapting technologies that are being used in other industries with great success,” he says. Under Hillyer’s leadership, Deckers has moved to a digital-first design philosophy, creating footwear in 3D modeling software and then striving to capitalize on the efficiencies that come from using that design through manufacturing, sales, and marketing. This workflow is only possible thanks to the intersection of computing power, real-time rendering, and shifting roles from specialized programmers to creative designers. 

 

Part of Hillyer’s challenge is moving away from the sluggish and error-prone workflow of years past, when footwear designs were created on paper and then sent to manufacturing partners in Asia. “We’d cross our fingers and hope they understood our vision. We’d hope they could interpret a side view picture of a shoe and magically make it three-dimensional,” Hillyer says. These days, that dynamic is turned upside down – starting with a 3D design, the manufacturer is now fully equipped for success. The 3D design gives precise information about the materials needed, and any changes update the Bill of Materials automatically.

 

The efficiencies extend well beyond manufacturing. “If you have 17 colors of a given boot, you can make one of them and represent the rest of them in 3D, so you don’t have to ship thousands of samples around the world anymore,” says Hillyer.

 

When it comes to modeling real-world objects, especially in worlds like fashion and home design, accurately representing textures, fabrics, and other materials is critical. 3D developers are already meeting that challenge. CLO 3D,3 for example, is a 3D tool that lets designers create and visualize 3D versions of clothing – even seeing how it flows over the human form. A similar tool, Browzwear,4 works in conjunction with the Fabric Analyzer, a device that precisely measures the thickness, stretch and bend properties of fabric, which can be used by 3D design software to accurately model exactly how the fabric will behave on a real person.

VR Offers a Whole New View

For decades, our interactions with computers have been flat – we’ve worked, played, and been entertained through a two-dimensional screen. Even when content was depicted as three-dimensional, it was necessarily displayed on a flat screen. Those days are ending – the future is shaping up to be truly three-dimensional.

 

3D has begun to enter the mainstream for both consumers and commercial applications through virtual reality (VR), a stereoscopic experience that simulates a fully immersive depiction of a 3D environment. A variety of factors, including the development of GPUs sufficiently powerful to render real-time stereoscopic visuals and an explosion of commercially available VR headsets, are putting VR in reach of most users.

REACH, an Emblematic Group platform, lets users place real people into high-res 3D environments, like the one pictured above. 

That’s a dramatic shift from just a few years ago, when cutting-edge creators who wanted to explore VR were forced to literally make their own hardware. Nonny de la Peña, Founder and CEO of Emblematic Group, recalls the complexity of creating Hunger in LA, a VR project she led in 2012. “Thank God, I don’t have to make physical gear anymore,” she says, recalling that she once had to source her own lenses, 3D print her own goggles and purchase tracking cameras in order to cobble together equipment to show her creation at the Sundance Film Festival. Still, when she started it was a fertile time for the nascent VR industry: future Oculus founder Palmer Luckey was an intern in the University of Southern California lab where de la Peña was a research fellow.

“VR has this really interesting way of engaging with both parts of our brain – the part that makes us feel like we’re there and the part that knows we’re not there.”

Nonny de la Peña

CEO and Founder of Emblematic Group

Now, creators can rely on a variety of commercially available hardware, and that’s helping drive VR forward. As a journalist, de la Peña thinks about VR in terms of its visceral impact on users. “There’s nothing like it. VR has this really interesting way of engaging with both parts of our brain – the part that makes us feel like we’re there and the part that knows we’re not there. That duality of presence, of being there and knowing you’re here at the same time, is why it’s such a powerful medium to work with.”

 

Certainly, immersive journalism like de la Peña’s is only one application of VR; larger markets include gaming, marketing, and even commerce. Andi Mastrosavas is Chief Product Officer at Insperience, a company that’s developing VR-powered retail experiences. She sees 2020’s global COVID-19 pandemic as helping to accelerate VR’s adoption, particularly in retail. “Right now, we know there are no real commerce VR experiences to speak of,” says Mastrosavas. “To use online video as an analogy, we’re pre-YouTube.” Mastrosavas says she used to imagine a timeline of five to seven years before people would routinely embrace retail experiences in VR at home. “I think that’s now been condensed to three years.”

 

Shailendra Mathur, Vice President of Architecture at Avid Technology, looks more broadly at VR’s role in entertainment, and sums up the imperative for VR this way: “I believe there’s going to be a point where we won’t be satisfied with entertainment until we can offer an alternate reality that matches how our senses work.

“There’s going to be a point where we won’t be satisfied with entertainment until we can offer an alternate reality that matches how our senses work.”

Shailendra Mathur

Vice President of Architecture, Avid Technology

Viewing through two eyes is the natural thing to do, and Stereoscopic VR takes us into the next level; it’s a wholly immersive experience.”

 

Getting there will have its challenges. Despite advances in headset technology, VR hasn’t instantly become a mainstream consumer technology, and it hasn’t yet reached its potential. Cost remains a barrier to mass adoption. And in retail or social environments, the isolating nature of a VR headset can work against it.

Adobe Aero enables users to view, build, and share immersive and interactive AR experiences.

XR is Really the Future

Moreover, VR may not have taken the world by storm because it’s only part of the story. The real future of 3D interaction is likely to be Augmented Reality (AR) and its close cousin, Mixed Reality. Manufacturing is one of many industries exploring applications for AR in tasks like safety, training, logistics, and maintenance.5 “I truly believe that AR will eventually be everywhere,” predicts Deguy. “Right now, VR is more representative of what the AR experience will eventually look like, after we solve the hardware problem about how to display it without hardware getting in people’s way.”

 

Says de la Peña, “I would use the term XR or extended reality instead of VR because when you create things spatially, you can explore them as either a virtual reality or an augmented reality experience. I think that the idea of the separation between those two technologies is going to go away.”

 

AR technologies already give designers the ability to place virtual objects in the user’s field of view, mixing them with real-life environments. Consider Adobe’s Project Aero,6 for example, which is now available to customers in early access. Objects designed in Photoshop can be exported to an AR app with a click – like sending a file to prepress – where clients and customers can use their phones to see the object appear in full 3D in whatever physical environment they’re standing in. It’s akin to instant 3D printing.

 

And by equipping computers with an understanding of the local environment mapped in 3D data, virtual and real objects can interact within AR headsets. Digital objects can be placed on a real tabletop, for example, and a headset can maintain the illusion that virtual objects exist in the same space as real ones. Using tracking devices and sensors, users can perceive physical objects differently in the virtual environment, and fully interact with them. This, in a nutshell, is Mixed Reality. When the computer can digest the real world as 3D metadata, there are few limits to how the real and virtual can interact.

 

That’s opened up some exciting new possibilities. Volumetric capture solutions like 4D Views’ Holosys7 can scan and render real-world objects into AR environments in real time, enabling the insertion of physically distant elements into an interactive video, or creating a “hologram” in which users carry on a two-way video conversation with someone as if they were in the same room, perceived through AR goggles.

 

Spatial computing takes these concepts one step further. Over the last decade or two, interactions with computers have become more natural and human-centric. We’ve gone from arcane text input to mouse-and-keyboard control to gestures on a screen. Spatial computing allows gesture control – currently practical only in VR – to happen in the 3D world, so users can interact with virtual interfaces and objects just by reaching out and touching them. And spatial computing won’t stay strapped into VR headsets forever, either. Already, wearable devices like Litho8 allow users to interact with digital objects in the real world.

“Having designs in 3D allows brands to market things while they’re still being manufactured.”

Sarah Krasley

CEO of fashion technology company Shimmy Technologies

Challenges and Opportunities for 3D

All of this potential comes at a cost. Traditionally, formal training in 3D design has been hard to come by; the usual path required a degree in traditional design, followed by hands-on experience in industry, with no obvious pathway to jump directly into 3D. “You had to learn from other people around you, and the people that were doing it at that point weren’t really doing an amazing job at it. It was really just a lot of trial and error,” recalls visual artist, director and Z by HP Ambassador Shane Griffin, who has directed videos for brands like Apple, Google, Microsoft, and Adidas. But access to 3D design tools and education is improving. 

 

Shimmy Technologies9 is a fashion technology company whose mission, in part, is to prepare garment workers in Bangladesh to make the transition to 3D. Although these workers are often not computer literate, Shimmy helps them level up their 3D skills. “We built a little video game that up-skilled garment workers’ basic digital skills and transferred what those workers know about physically making garments on the factory floor to digitally constructing them in 3D models,” says Sarah Krasley, Shimmy’s CEO.

 

Shimmy did that by building an interface that let workers take what they already knew about making clothing on a sewing machine and apply that knowledge to highly valuable digital 3D work. “This work would not have been possible without artificial intelligence,” Krasley says, explaining that the software communicates in the Bengali dialect the workers speak. The result? Many of these workers have advanced into roles that pay 250% of what they were earning before.

 

The changes aren’t just happening in Bangladesh. Real-time interactive 3D design is no longer an unusual career to which artists must migrate laterally from traditional design studies. Unity, for example, has released a pair of tools – Create with Code10 and Unity Teach11 – for classroom education. Unity Teach is a platform that supports educators who want to expand their skills teaching STEM topics, while Create with Code is a complete set of courseware for learning to program in Unity, including teacher modules, tests, and answer keys; students who complete it can then pursue the Unity Certified Programmer certification. 

Even as 3D explodes into the mainstream, powered by better tools, more educational opportunities, and an ever-expanding universe of applications, the very success of the technology is starting to create some challenges. All the way back in 2004, a film was criticized for crossing into the Uncanny Valley – that point in visual fidelity where CG characters are lifelike, yet not quite lifelike enough, so they trigger a warning in the back of our brains that something is wrong with what we’re seeing.

 

But even that is changing. Siren,12 a real-time digital woman created by a partnership of studios including Epic Games and CubicMotion, is arguably indistinguishable from the real person. Siren’s face – including its eye movements, facial expressions, and skin texture – elevates the state of the art and suggests the Uncanny Valley can, in fact, be crossed.

Implications

The field of 3D is crackling with innovation precisely because it is so pervasive, with artists and designers eagerly embracing it. 

 

Traditional 2D industries – even ones like fashion, with workflows that date back millennia – are transitioning to 3D. “3D affects the industry in so many ways,” says Krasley. “Being able to prototype digitally saves a lot of time. Having designs in 3D allows brands to market things while they’re still being manufactured. And if you have a 3D model that you’re using for marketing, you can use them to lay out how you’ll set this out in the store. There’s some CAD platforms that allow you to have a fashion show so you can see what this stuff will look like on a catwalk in motion.” And eventually, Krasley says, “these same models will be useful in the AR and VR experience for the end user shopping experience.”

 

But all those applications for 3D within any single industry suggest looming standards issues that must be resolved, sooner rather than later. Hillyer agrees: “I sit on a couple of different innovation boards within the 3D Retail Coalition. As an industry, we’re trying to standardize how materials are scanned and stored, along with all the associated data that goes along with describing a material.” To that end, Hillyer has been developing an online platform called Material Exchange,13 which can serve as a standard platform for companies across the fashion industry to securely scan and store the digital representations of materials. 

 

These kinds of standardization conversations are happening across industries that are embracing – and stumbling into the complexity of – adopting 3D. And just in time, too, because with the progression of real-time rendering, mixed reality, and the growth of 3D across every conceivable industry, it’s just a matter of time before everything in the real world has a digital twin – its own set of 3D metadata to be consumed by 3D software and display technology.

AI: 
Expanding the Limits of your Creativity

Humans are tool users, and we’ve long relied on tools to help make art. From painting with pigments made from animal fat and dried fruits to using a chisel to carve marble to creating virtual 3D environments with Unity or Unreal, we’ve always found new and innovative ways to express ourselves. One of the newest tools at our disposal is artificial intelligence. 

Modern Artificial Intelligence (AI) has become very good at performing specific tasks within a narrow range of knowledge – whether it’s driving a car, scheduling appointments, or composing a photograph. And it’s growing at a phenomenal rate. Gartner projects that global business value derived from AI will reach $3.9 trillion by 2022,1 more than three times the 2018 figure. 

A lot of that growth can be attributed to a veritable explosion of open source AI initiatives. From TensorFlow2 to Keras3 to Microsoft’s own open source platform Cognitive Toolkit,4 there is a wealth of frameworks that can be used to build and train neural networks. The barrier to entry is dropping, and more companies are willing to experiment. 
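As a hedged illustration of how low that barrier has become, the sketch below defines and compiles a small neural network with Keras (bundled with TensorFlow); the layer sizes and the implied image-classification task are arbitrary assumptions for demonstration, not anything drawn from this report.

```python
import tensorflow as tf
from tensorflow import keras

# A tiny classifier for 28x28 grayscale images (e.g., handwritten digits).
model = keras.Sequential([
    keras.Input(shape=(28, 28)),
    keras.layers.Flatten(),                        # unroll each image into a vector
    keras.layers.Dense(128, activation="relu"),    # one hidden layer
    keras.layers.Dense(10, activation="softmax"),  # one output per class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.summary()
# model.fit(train_images, train_labels, epochs=5)  # training data supplied by the user
```

A decade ago this kind of model meant writing low-level numerical code; today it is a dozen lines against a free, open source framework.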

 

AI is gaining support from business leaders as well. Studies show that as much as 73 percent of senior executives want to invest in AI,5 and that enthusiasm is matched by improvements in hardware that are enabling AI to continue to grow. NVIDIA® has announced a $99 AI computer called the Jetson Nano,6 custom made for machine learning, while Apple recently touted its new A13 Bionic chip for the iPhone 11, designed in part to facilitate applications that rely on machine learning. 

 

All this has set the stage for AI blossoming as a creative partner across the spectrum of creative applications. As long ago as 2015, AI convincingly composed its own classical music,7 and today it can apply the style of an iconic artist to make its own art in the same genre, like Shubhang Desai’s AI that can render any picture in the visual style of another.8

Shifting Perceptions

AI isn’t turning out the way many of us expected. While science fiction has long given AI a reputation for evil, real AI is proving to be more of a faithful assistant. Evie,9 Sally,10 and X.AI11 are all AI-powered assistants that schedule meetings and manage calendars like a human assistant. Google Home, Amazon Alexa, and Apple’s Siri are all variations on the AI-assistant theme. It’s no small phenomenon; Amazon has sold more than 100 million devices with Alexa on board,12 and Siri has been built into roughly 1.4 billion smartphones (since it debuted in 2010).13

 

AI’s prominence has changed the very nature of many creative workflows. Using AI-powered photogrammetry, for example, artists can populate 3D environments with natural looking objects that have been built automatically, without needing to create them by hand. Explains Marc Petit, General Manager for Unreal Engine at Epic Games, “Our goal is to change the economics around content creation, using machine learning to generate more content, and then stylize those assets. That’s why you have much less human intervention, which [is] an expensive resource.” 

 

Changing the workflow also means expanding possibilities. Danny Lange, Vice President of AI and Machine Learning at Unity Technologies, says, “Today, an architect can have an AI simulate billions of layouts of a house. The architect can say, ‘give me the top 100 layouts that give the most sunlight in that house year-round.’ And then I can take those and pick one of them and elaborate on it and I’m going to put my touch on it.” Essentially, the computer made sure that the architect was starting with the best possible solution for a critical aspect of the design, and then the architect can innovate on top of that. 

 

But while AI is clearly demonstrating its ability to free people from mundane tasks, it can do more, stepping up to the role of collaborator. “As a filmmaker and designer, I’m very interested in doing the work myself,” says designer, director and Z by HP Ambassador GMUNK. “But I like the idea of directing the AI to take from here and put it there.” A favorite example of this kind of human-AI collaboration? Style transfer. “My wife did a bunch of really cool acrylic paintings and [a client] style-transferred this library of paintings to landscapes,” says GMUNK. “So you’re sampling all these different textures and colors from the paintings and putting them on these landscapes to make these really surreal, dreamlike images. It’s really beautiful.” 
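As a rough sketch of what this kind of style transfer can look like in code, the snippet below runs a publicly available arbitrary-image-stylization model from TensorFlow Hub; the image file names are placeholders, and a production pipeline like the one GMUNK describes would be considerably more involved.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Pretrained arbitrary-image-stylization model (Magenta project) from TensorFlow Hub.
stylize = hub.load("https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")

def load_image(path):
    img = tf.io.read_file(path)
    img = tf.image.decode_image(img, channels=3, dtype=tf.float32)
    return img[tf.newaxis, ...]              # add a batch dimension

content = load_image("landscape.jpg")        # placeholder file names
style = load_image("acrylic_painting.jpg")

# The model re-renders the content image using the style image's color and texture.
stylized = stylize(tf.constant(content), tf.constant(style))[0]
tf.keras.utils.save_img("stylized_landscape.png", stylized[0])
```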

 

AI approaches creative and technical problems in novel ways that humans might never envision. Modern AI relies on machine learning, in which computers teach themselves after being fed enormous amounts of information. Because the resulting data structures are so complex, it’s not clear how a computer arrives at the decision or recommendation that it does. “It’s the same as looking at a brain,” says Lange. “I can look at it, but I’m not sure I understand what’s going on in there. It’s a black box. We do what a psychologist would do; we start feeding the system examples, and then we see what it outputs and we get a sense of the way that it performs.” 

“AI sometimes makes decisions and does things that no human would do on their own. And that’s exciting.”

Jody MacDonald

Documentary Photographer and Z by HP Ambassador

Documentary photographer and Z by HP Ambassador Jody MacDonald finds AI intriguing for just this reason. “I’m not ready to start using it because I feel like a lot of what AI does still looks too fake. I can look at a photo and know if it’s been manipulated with some machine learning algorithms. But at the same time, AI sometimes makes decisions and does things that no human would do on their own. And that’s exciting.” 

 

Back in 2016, IBM’s Watson computer was tasked with helping to create the trailer for an AI-themed horror film. Watson was fed 100 horror film trailers and then used those examples to analyze the film. In the end, Watson recommended 10 scenes (about 6 minutes of video) for the trailer, which was then stitched together by human video editors. 

Welcoming AI into the Creative Industry

It’s debatable – and has been extensively debated14 – whether elephants understand what they are doing when they paint. But it’s certain that there’s no intent behind what computers do (at least, not yet). Says Matthew Carse, Founder of Image Data Systems, “Medieval painters relied on skilled craftsmen to create ultramarine blue from lapis lazuli, grinding it into powder and adding pine resin, gum mastic and wax, working it over a period of days. Michelangelo’s Sistine Chapel would not have been possible without their work, but they received no credit. The genius lay in how to apply the material, not how to make it.” 

 

Computers are just modern-day versions of those skilled craftsmen. GMUNK does a lot of work with data that he often needs to visualize. “I might need to shoot a library of content that’s visualizing music on stage,” he says. “You photograph a bunch of different things, and then you have AI selectively edit the library of footage according to the music.” GMUNK, in other words, is the artist who captured the content, but he then delegates the time-consuming musical synchronization to an AI apprentice. 

Game-balancing optimization within Unity Simulation allows games to be playtested before launch, helping developers achieve better balance efficiently and accurately.

“Whatever the music is playing, it’s sampling the library of footage to create edits that you would never have thought of,” says GMUNK. “That’s super interesting. But it’s just a tool.”

 

Meanwhile, Adobe’s Sensei15 – a rich framework of AI-powered graphics tools – is finding its way into the entire portfolio of Adobe products. Morph Cut is a Sensei-powered tool that helps film editors smooth jump cuts in Adobe Premiere Pro, while Sensei powers Face-Aware Liquify in Photoshop, helping artists make more natural edits to human faces. And the list goes on – Sensei uses AI for faster and smarter editing, image search, and image sourcing. It’s the ultimate journeyman craftsman, helping the pro do more in less time. 

 

The power of AI is clearing the decks of mundane and repetitive tasks so that artists can concentrate on doing the work that requires the most brain power. This is true across the gamut of AI, machine learning, and the Internet of Things. These technologies let humans concentrate on the most intellectual aspects of their jobs. 

“I think it’s super interesting to think about using a machine that understands me and my technique so well [that] it finds ways to help connect my ideas together, like I’m having a conversation with the app.”

Alex Trochut

Digital Craftsman and Z by HP Ambassador

Digital craftsman and Z by HP Ambassador Alex Trochut creating in his studio

Says Lange, “Computers can be creative. There’s no doubt in my mind about that – but I don’t want that to be confused with art. I wouldn’t care about a painting generated by a computer because it doesn’t really matter to me what the computer does. But computers can help do a lot of the creative work that may probably be less valued by the audience, but which allow the artist and the creative individual to focus on the more high-value, high-impact aspects.” 

 

That dynamic is appealing to many artists. Says digital craftsman and Z by HP Ambassador Alex Trochut, “I think it’s super interesting to think about using a machine that understands me and my technique so well [that] it finds ways to help connect my ideas together, like I’m having a conversation with the app.” 

 

Nonetheless, as computers and automation have moved inexorably into more and more career fields, creative professionals have long felt insulated because what they do has been difficult to automate. But that’s starting to change. Some artists worry that AI is getting too good. “It’s a little scary,” says Trochut. “If an AI knows you so well, there’s this vulnerability. What keeps the AI from replicating your style? Then someone can just use an AI instead of your own work.”

Implications

Even as computers increasingly collaborate with and assist humans in the creation of art, there’s little doubt to whom the credit should go. Not to the programmer – machine learning ensures that no human being can fully explain how an AI comes up with what it does. But at the same time, these AIs aren’t really demonstrating true creativity. 

 

No one would suggest that Sensei, for example, is meaningfully contributing to the creative process; it’s simply streamlining an artist’s workflow. Neither Photoshop’s content-aware fill nor the iPhone’s Deep Fusion (the iPhone 11 feature designed to improve the exposure of a scene that was painstakingly composed by a very human photographer) deserves equal billing with a photographer. Even as assistants and collaborators, they’re still just tools in the hands of creatives. That said, as the technology becomes increasingly sophisticated, will there come a point at which the AI is capable of unambiguous creativity, a full partner to human artists? 

 

Lange is certain that creators and AI will work much more intimately in coming years. “They are going to be more partners than tools. Where we in the past used tools to fill in colors or find outlines or something very trivial, these new tools are going to be much more intellectual, because they’re driven by patterns that we humans are driven by as well. They are not going to be as powerful and creative as we are, but they may suggest stuff that you’re going to look at and say, wow, that was actually a good suggestion. And then we can build upon that.” 

 

Perhaps it’s just a matter of time before an AI gets its name in the credits of a film without it seeming like little more than a novelty.

(n)K Resolution:
The Imminent Need for Infinite Pixels

The history of modern computing is virtually synonymous with the race to push ever more pixels onto screens. The first IBM PC had no graphics display at all; its MDA (Monochrome Display Adapter) only showed text. In quick succession, PCs adopted the 320x200-pixel CGA (Color Graphics Adapter) and 640x350-pixel EGA (Enhanced Graphics Adapter) formats before landing on 640x480-pixel VGA graphics in 1987.

In more modern times, American television’s ancient NTSC standard of 525 blurry scan lines (or PAL’s 625 lines, found in most of the rest of the world) gave way to High Definition (HD) in 1998. This is around the time that computer and television standards merged, giving us a veritable parade of pixel densities. 720p (1280x720 pixels) was quickly supplanted by 1080p (1920x1080 pixels). Today, 4K (3840x2160 pixels, also known as Ultra HD) reigns, but 8K (7680x4320 pixels) waits in the wings. 

 

The moral is simple: More pixels have always been better. Modern techniques are enabling the creation of content with unlimited resolution, able to scale up to whatever display device exists. “We see a lot of industries moving from video playback to real-time 3D because it is inherently resolution independent,” says Marc Petit, General Manager for Unreal Engine at Epic Games. “When you generate the frames on demand, you can get that content at 2K, 4K or 8K – you just have to tell the renderer how many pixels to render.”
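A toy sketch of that idea: when a scene is described in resolution-independent coordinates, the same description can be rasterized at whatever pixel count the display demands. The one-circle “scene” below, drawn with the Pillow library, is an invented example used purely for illustration.

```python
from PIL import Image, ImageDraw

def render(width, height):
    """Rasterize the same simple 'scene' - described as fractions of the frame,
    not as pixels - at whatever resolution the display requires."""
    img = Image.new("RGB", (width, height), "black")
    draw = ImageDraw.Draw(img)
    cx, cy, r = 0.5 * width, 0.5 * height, 0.25 * min(width, height)
    draw.ellipse([cx - r, cy - r, cx + r, cy + r], fill="orange")
    return img

render(2048, 1080).save("scene_2k.png")   # 2K output
render(7680, 4320).save("scene_8k.png")   # 8K output, same scene description
```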

Image Resolution Needs Are Boundless

We’re entering an age in which the need for image resolution is boundless. From 4K to 8K to 32K to (n)K, our demand for more resolution in visual content remains insatiable. 

 

High resolution has myriad applications. High-resolution capture allows the composition and framing to be changed in post, for example, even giving the impression of multiple camera angles. It ensures the content is future-proofed, able to accommodate future display formats. And resolution isn’t limited just to video capture; it’s integral to graphics and visual effects as well. Textures, for example, add another dimension to the needs of creators. Especially in real-time environments like games, objects might have multiple textures that need to be loaded in real time as the viewer moves through the environment, getting close enough to reveal limitations in visual quality. 

 

“Everyone is being asked to create more and more photorealistic content,” says Ross McKegney, Director of Engineering for 3D and AR at Adobe. “Historically, due to hardware limitations, we were down-sampling everything because you didn’t have enough memory on the GPU, you didn’t have enough compute capability. That’s increasingly not the case anymore, and that expectation is driving the need for higher quality geometry and higher resolution textures everywhere.” 

Virtual Reality Drives the Need for Resolution

These kinds of issues are manifesting themselves in virtual reality (VR), which is seeing significant growth in industrial applications. First-generation consumer tethered VR systems, which appeared on the market around 2016, displayed a total of 2160x1200 pixels spread across a 100-degree field of view.1 But humans can see at least 180 degrees, and sometimes as much as 210 degrees. Without all that peripheral vision, there’s no denying you’re wearing a headset that delivers a truncated view of the (virtual) world. For VR to be more immersive, resolution needs to increase along with field of view (FOV). 

 

There’s been some progress; the more recent HP Reverb VR headset, for example, includes a more expansive 114-degree field of view and a display resolution of 2160x2160 pixels per eye. That’s a dramatic improvement over the 1080x1200 pixels per eye found in first-generation headsets like the HTC Vive, and even over more recent standalone headsets like the Oculus Quest. 

From 4K to 8K to 32K to (n)K

But some simple math illustrates why there’s still so much room for improvement to enhance VR realism. If you double a 100-degree FOV to 200 degrees – the full human field of view – you need to quadruple the resolution from 2880x1700 to 5760x3400 pixels, doubling the pixel count in each dimension, just to keep the same pixel density across that larger view. To drive a headset running that kind of resolution, you don’t just need new display technology, but also a display adapter, processor, and memory capable of moving all those pixels at 90 frames per second or more. 
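A back-of-the-envelope calculation with those hypothetical figures shows why the hardware demands escalate so quickly.

```python
# Rough pixel-throughput arithmetic for the hypothetical 200-degree headset above.
width, height, fps = 5760, 3400, 90

pixels_per_frame = width * height              # 19,584,000 pixels per frame
pixels_per_second = pixels_per_frame * fps     # roughly 1.76 billion pixels every second
bytes_per_second = pixels_per_second * 4       # about 7 GB/s of raw 32-bit color, before overdraw or effects

print(f"{pixels_per_frame:,} px/frame")
print(f"{pixels_per_second:,} px/s")
print(f"~{bytes_per_second / 1e9:.1f} GB/s raw framebuffer traffic")
```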

 

That’s just part of VR’s problem. The real world has virtually infinite resolution – everything stays razor sharp no matter how close you get to it. If VR designers are going to mimic the real world, they need to be able to simulate reality’s nearly infinite resolution, and that requires the ability to scale textures in real time. 

 

“Honestly, the goal of a real-time engine in the past wasn’t to be photoreal,” says McKegney. “The goal today, and especially going into the future, is to be completely photorealistic in real time, in VR and elsewhere.” Petit agrees: “Even if you’re creating a fantasy world it’s a world that does not exist. But photorealism is the benchmark.” 

 

Because it’s impossible to store the virtually unlimited resolution needed to accommodate viewing every object in a VR environment close up, the technique of only loading higher-resolution textures as they’re needed is a tried and true technological shortcut borrowed from the world of video games. In other words, (n)K resolution is, at least in part, about having a library of texture resolutions stored in software to deliver the fidelity needed in any given situation. 
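A simplified sketch of that shortcut – implemented in real engines as mipmapping and texture streaming on the GPU – might look like the following; the 8K texture file name, the sizes, and the selection rule are illustrative assumptions.

```python
from PIL import Image

def build_mip_chain(path, min_size=64):
    """Build a chain of progressively halved versions of a texture (a simple mip chain)."""
    levels = [Image.open(path)]
    while min(levels[-1].size) > min_size:
        w, h = levels[-1].size
        levels.append(levels[-1].resize((max(w // 2, 1), max(h // 2, 1))))
    return levels

def pick_level(levels, screen_coverage_px):
    """Choose the smallest precomputed level that still has at least as many
    texels as the texture will cover on screen, so full fidelity is loaded
    only when the viewer gets close enough to need it."""
    for level in reversed(levels):              # smallest level first
        if min(level.size) >= screen_coverage_px:
            return level
    return levels[0]                            # fall back to the full-resolution texture

mips = build_mip_chain("rock_albedo_8k.png")    # hypothetical 8K source texture
texture = pick_level(mips, screen_coverage_px=512)
```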

Computational Photography

There was a time when most kinds of computational photography were something of a niche – but these days, computational techniques have gone mainstream. To make up for the lack of interchangeable lenses found in professional cameras, smartphones increasingly rely on their powerful processors and computational techniques to take better pictures, putting this technology in the hands of millions of people. Panoramas are a rudimentary example: as you pan a phone, it takes a series of photos and then “stitches” them together in real time, creating one long, seamless image. 
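A minimal sketch of that stitching step, using OpenCV’s built-in panorama stitcher (the frame file names are placeholders), might look like this:

```python
import cv2

# Frames captured while panning the camera (placeholder file names).
frames = [cv2.imread(p) for p in ["pan_01.jpg", "pan_02.jpg", "pan_03.jpg"]]

stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)      # align, warp, and blend the frames

if status == cv2.Stitcher_OK:
    cv2.imwrite("panorama.jpg", panorama)       # one long, seamless image
else:
    print(f"Stitching failed with status {status}")
```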

The HP Reverb VR headset expands peripheral vision to a 114-degree field of view, bringing the field of view closer to that of everyday life.

Google’s Pixel 3,2 for example, captures 15 photos at a time when you press the shutter release, and then interpolates them into a super resolution photo that’s sharper and larger than would be possible by taking the photo in the traditional way. It does this, in part, by comparing similar pixel regions in each of the various photos. Ordinary camera shake from hand-holding the phone allows the software to sample each part of the image slightly differently, for an overall better image. 

 

There’s also High Dynamic Range (HDR), which uses a series of photos shot at various exposures to increase the dynamic range of photos and video. This solves an age-old problem: the human eye has significantly more dynamic range (in photography terms, about 30 stops, or a dynamic range of about 1,000,000,000:1) than film or digital cameras (both of which top out around 12 stops, or just 4,000:1). Any scene that has very bright and very dark elements will invariably require a “compromise” in setting the exposure – hence, flat and lifeless sunsets. 
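A hedged sketch of exposure merging with OpenCV illustrates the idea; the bracketed file names and shutter speeds are assumptions, and a phone’s HDR pipeline is far more sophisticated and fully automatic.

```python
import cv2
import numpy as np

# Three bracketed exposures of the same scene: dark, mid, bright (placeholder file names).
exposures = [cv2.imread(p) for p in ["sunset_under.jpg", "sunset_mid.jpg", "sunset_over.jpg"]]
times = np.array([1 / 400.0, 1 / 100.0, 1 / 25.0], dtype=np.float32)  # shutter speeds in seconds

# Merge the bracket into a high-dynamic-range radiance map...
hdr = cv2.createMergeDebevec().process(exposures, times)

# ...then tone-map it back down to an 8-bit image a normal display can show.
ldr = cv2.createTonemap(gamma=2.2).process(hdr)
cv2.imwrite("sunset_hdr.jpg", np.clip(ldr * 255, 0, 255).astype("uint8"))
```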

 

But (n)K resolution comes to the rescue. Like the library of textures that need to be rendered in VR, computational photography relies on an enormous library of photos captured and processed in the background to make modern photography possible. The numbers are sobering. When you consider the commonly cited statistic that more photos were taken this year than in the whole previous history of photography,3 it becomes clear that we’re entering a whole new epoch for processing and storage.  

“We’re at the moment in the industry where we finally can have a ‘what you see is what you get’ experience.”

Sébastien Deguy

Vice President of 3D and Immersive, Adobe

Ray Tracing

For decades, ray tracing has been an arcane side-branch of computer graphics. By tracing the path of every ray of light entering the frame, a ray-traced image can reach otherwise unapproachable levels of detail and realism, with high-fidelity reflections, water effects, lighting and shadows. Translucent materials can be accurately modeled, along with how light and colors realistically scatter through glass, crystal, water, and other tricky real-world materials. Ray-traced visuals are almost immediately distinguishable from traditionally rendered scenes, which take shortcuts to simulate what a scene would look like in its virtual environment. 
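At its core, that tracing boils down to geometric intersection tests repeated billions of times per frame. The minimal sketch below shows the single most common test – a primary ray against a sphere – and is purely illustrative; a real renderer layers materials, lighting, and recursive reflection and refraction rays on top of it.

```python
import numpy as np

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along a ray to its nearest intersection with a sphere,
    or None if the ray misses. A ray tracer runs tests like this for every pixel's
    primary ray, and again for shadow, reflection, and refraction rays."""
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)          # direction is assumed to be normalized
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                          # the ray misses the sphere entirely
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

# One primary ray fired straight down the z-axis toward a unit sphere five units away.
hit = ray_sphere_hit(np.array([0.0, 0.0, 0.0]),
                     np.array([0.0, 0.0, 1.0]),
                     np.array([0.0, 0.0, 5.0]), 1.0)
print(hit)  # 4.0: the near surface of the sphere sits four units down the ray
```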

 

Traditionally, ray tracing has required enormous amounts of computational power, and rendering even a short scene just a few years ago could take hours, days, or weeks, depending on the complexity. Ray-traced renders were the domain of Hollywood, which had the lead time to render lengthy video sequences – and even there, the technique was rarely applied. 

 

“To get anything close to photorealism, we used to write complex shaders specific to the frames being rendered,” says McKegney. But that has changed. Now, PBR materials and ray tracing are a routine part of the lighting toolkit in film, gaming, and other rendering applications. “Ray tracing is particularly geared towards managing reflections and refractions, and so it’s a complementary technique to global illumination,” says Petit. 

Still image from the new Unreal Engine 5 demo, “Lumen in the Land of Nanite,” which highlights two of the engine’s core technologies: Lumen, for real-time dynamic global illumination, and Nanite, for virtualized micropolygon geometry with extreme levels of detail.

Thanks to a new generation of graphics hardware like the NVIDIA® Quadro RTX™ family of graphics cards, which can render ray-traced scenes in real time, the demand for real-time rendering and buffering of ray-traced imagery will soon rival that of traditionally rendered video content. 

 

For Sébastien Deguy, Vice President of 3D and Immersive at Adobe, this is a critical enabling technology: “It’s a total game changer. We’re at the moment in the industry where we finally can have a WYSIWYG [what you see is what you get] experience in 3D. You don’t have to hit the button and wait for hours to have it render. It’s completely photorealistic and it’s in front of you all the time during the creation process.”

Implications

There was a time when display resolution was a major limiting factor in the pursuit of image quality. And while display resolution will continue to grow to accommodate larger displays and novel applications like VR, (n)K resolution has implications beyond the screen: it ensures resolution independence for scalable, realistic texture maps, and it accounts for the enormous number of pixels being produced behind the scenes for tasks like computational photography. 

 

Every sign points to the need for exponential growth in graphics processing, memory, and storage to keep pace with these files, which are likely to continue to grow in an unbounded way. “Everything is going to be photorealistic, and that means we need better hardware to create it and better hardware to experience it,” says Deguy.

Sources:

  1. https://www.vive.com/us/product/vive-virtual-reality-system/
  2. https://store.google.com/product/pixel_3
  3. https://www.macfilos.com/2015/09/01/2015-9-1-more-photographs-taken-in-one-year-thanin-the-entire-history-of-film/

Conclusion

The technology needed to create photorealistic content and the demand to consume it have caught up with one another: It has never been more true that if you can imagine something, it’s now possible to create it – assuming you have the right tools and enough horsepower, that is. All this has created a huge opportunity for creative professionals, as well as challenges. 

 

“It’s a hundred percent true we are being asked to produce more and more photorealistic content,” says Ross McKegney, Director of Engineering for 3D and AR at Adobe. That means creators need to invest in the tools – both hardware and software – needed to be successful. Increasingly, rendering engines are going to place unprecedented demands on hardware as they use machine learning to help streamline and automate the design and creation process. At the same time, memory and storage needs are growing to keep pace with ever-larger textures, polygon and facet counts, machine learning algorithms, and display resolutions. 

 

At the same time, AI is on the cusp of being able to do so much more in support of creators. Danny Lange, Vice President of AI and Machine Learning at Unity Technologies, says, “Computers can be creative. There’s no doubt in my mind about that.” It’s insights like that which lead Shane Griffin, visual artist, director and Z by HP Ambassador, to yearn for what’s right around the corner. “[The apps I want to use] haven’t been written yet. I’d love to be able to use some deep learning tools with my own images to see what it would create.” 

 

Getting there is the hard part, though. Some PCs, which weren’t designed with these kinds of applications in mind, may not be up to the challenge. As you prepare to move forward into the future of photorealistic real-time rendering, ray-traced lights and shadows, (n)K projects that seamlessly render at whatever tomorrow’s resolution will be, and digital assistants that use computational techniques to streamline and automate your workflows, be sure you are poised for success with hardware that can meet the demands of your software and the work you do. 

 

Z by HP equips creative pros with the powerful tools they need to fuel their creative evolution. Check out more at hp.com/creativepros.


Disclaimers
  1. Product may differ from images depicted.

     

    The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

  2. The trademarks used herein are owned by their respective owners. This report reflects information from industry leaders and organizations who, unless indicated, are not affiliated or associated with HP. Third party trademarks and links are provided only as a convenience and for informational purposes and do not represent any endorsement by their respective owners of HP.

  3. Intel, the Intel logo, Core and Xeon are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Microsoft and Windows are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. NVIDIA, the NVIDIA logo, and NVIDIA NGC, NVIDIA Omniverse, NVIDIA RAPIDS, NVIDIA RTX are trademarks and/or registered trademarks of NVIDIA Corporation in the United States and other countries. Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries. AMD is a trademark of Advanced Micro Devices, Inc.

  4. 4AA7-7936ENW, July 2020