

Generative AI now embedded directly into Adobe Express workflow


Adobe has turned up the heat on Australia’s graphic design platform Canva by releasing a new beta version of Adobe Express.  The latest version of Express brings Adobe’s popular photo, design, video, document, and generative AI tools into a new all-in-one editor.

Firefly generative AI is now embedded directly into Express workflows, enabling creators at various skill levels to generate images and text effects with simple text prompts to enhance social media posts, posters, flyers, and more. Firefly content is trained on a unique dataset and tagged with Content Credentials, bringing critical trust and transparency to digital content wherever it travels.

Image Matrix Tech sat through a demo of the various creative options, and they were fast to render using Firefly. The great news is no one needs a Creative Cloud account to use it. It's now on desktop, coming soon to mobile.

Generative AI embedded in Adobe Express

“The new release of Adobe Express brings together technology from Photoshop, Illustrator, Premiere and Acrobat with our Firefly generative AI models into a fun and easy web application experience allowing everyone, from individuals to large organisations to create content that stands out,” said David Wadhwani, president, Digital Media Business, Adobe.

“Creators can now make stunning videos, designs and documents faster than ever before and our seamless workflows with our flagship applications give Creative Cloud subscribers even more control over the creative process.” 

Stand out from the crowd with easy-to-make, elaborate posts

ADOBE TEAMS WITH GOOGLE

Adobe is partnering with Google to bring Firefly and Express to Bard, Google’s experimental conversational AI service. In the coming months, Firefly will become the premier generative AI partner for Bard, powering and highlighting text-to-image capabilities. With the new Bard by Google integration, users at all skill levels will be able to describe their vision to Bard in their own words to create Firefly generated images directly in Bard, and then modify them using Express to create standout social posts, posters, flyers and more with inspiration from the largest collection of beautiful, high-quality templates and assets. 

Additional Innovations in Adobe Express Include: 

·     New all-in-one editor gives users the ability to make high-impact design elements, engaging video and images, stunning PDFs, animation, and standout content ready for Instagram, TikTok and all their favorite channels and platforms. 

All-in-one editor 

·     Firefly integrated into Express makes it possible to generate custom images and text effects from just a description, using Text to Image and Text Effects features. 

·     New video, multiple page templates, and design elements bring even more inspiration to the largest collection of beautiful, high-quality content, now with nearly 200 million assets including video and design templates, royalty-free Adobe Stock images, video and audio assets, almost 22,000 fonts, plus more icons, backgrounds, and shapes. 

·     PDF support in the new all-in-one editor makes it even easier to import, edit and enhance documents to create visually stunning PDFs. 

·     More AI power helps creators take the guesswork out of design, quickly find the perfect addition to content, or get personalised template recommendations that fit unique styles, to create social media posts, videos, posters, flyers and more. 

Edit video

·     Quick actions like remove background in images and videos, animate a character using just audio, convert to GIF, and edit PDFs make it even easier to create standout content quickly and simply. 

·     Integration with Experience Manager Assets streamlines content planning, creation, collaborative review, distribution, and analysis, ultimately accelerating content velocity across an organisation. 

·     Real time collaboration and seamless review and commenting capabilities add speed to the creation process. Easily access, edit, and work with creative assets from Photoshop and Illustrator directly within Express, or add linked files that always stay in sync across apps. 

Real time collaboration

·     Animations like Fade In, Pop, Flicker, Bungee bring text, photos, videos, and assets to life in a new way. With Animate from Audio, powered by Adobe Character Animator, watch characters come to life with lips and gestures syncing to recorded dialogue. 



Leica M11-P: The Camera You Can Trust


The surge in AI imagery and altered photos from war zones is creating a nightmare for society. So it's a great relief to see Leica launch the world's first camera with Content Credentials built in.

Leica has deployed the global Coalition for Content Provenance and Authenticity (C2PA) standard in the M11-P. This ensures each image is captured with secure metadata.

It carries information such as camera make and model, as well as content-specific information including who captured an image and when, and how they did so.

Each image captured will receive a digital signature, and the authenticity of images can be easily verified by visiting contentcredentials.org/verify or in the Leica FOTOS app.

This is a big moment for the credibility of photography, especially photojournalism.

“We are thrilled to announce that industry-leading camera manufacturer Leica is officially launching the new M11-P camera — the world’s first camera with Content Credentials built-in,” said Santiago Lyon, Head of Advocacy and Education, CAI.

“This is a significant milestone for the Content Authenticity Initiative (CAI) and the future of photojournalism.

“It will usher in a powerful new way for photojournalists and creatives to combat misinformation and bring authenticity to their work and consumers, while pioneering widespread adoption of Content Credentials.”

So from the first capture to multiple edits, you’ll have a complete record of changes an image goes through.

WATCH DJURO SEN EXPLAIN LEICA M11-P CONTENT CREDENTIALS ON SKY NEWS WEEKEND EDITION

Here’s how it works.

First: A photograph is created with the Leica M11-P with Content Credentials activated.

Second: Content Credentials in Adobe Photoshop is enabled and the image from a Leica M11-P is imported. Content Credentials for the image can be verified in the menu.

Third: The photograph is altered using Photoshop. This edit becomes part of the file’s Content Credentials.

Fourth: The edited image is exported from Photoshop and inspected using Verify (contentcredentials.org/verify). This website verifies the image and changes made to it.
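The four steps above amount to a signed, append-only edit log that travels with the image. A toy model of that idea can be sketched in Python with standard-library hashing and HMAC signing. This is only an illustration of the chaining concept, not the real C2PA format, which uses standardised manifests and certificate-based signatures rather than a shared secret key:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"camera-private-key"  # stand-in only; real C2PA signing uses certificates

def sign(entry: dict) -> str:
    """Sign a canonical JSON serialisation of a log entry."""
    payload = json.dumps(entry, sort_keys=True).encode()
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def add_credential(log: list, action: str, image_bytes: bytes) -> list:
    """Append a signed record of an action and the resulting image state."""
    entry = {
        "action": action,
        "image_hash": hashlib.sha256(image_bytes).hexdigest(),
        # Each entry links back to the previous signature, forming a chain.
        "prev": log[-1]["signature"] if log else None,
    }
    entry["signature"] = sign(entry)
    log.append(entry)
    return log

def verify(log: list) -> bool:
    """Recompute every signature and check the chain is unbroken."""
    prev_sig = None
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "signature"}
        if entry["prev"] != prev_sig:
            return False
        if not hmac.compare_digest(sign(body), entry["signature"]):
            return False
        prev_sig = entry["signature"]
    return True

# Capture in-camera, then edit, as in steps one to three above.
log = add_credential([], "captured:Leica M11-P", b"raw sensor data")
log = add_credential(log, "edited:Photoshop crop", b"cropped image data")
print(verify(log))   # chain intact
log[0]["action"] = "captured:other camera"
print(verify(log))   # tampering breaks the first signature
```

Tampering with any recorded action or image hash invalidates that entry's signature, and every later entry is linked to it, which is how a verifier like contentcredentials.org/verify can show a complete, trustworthy edit history.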

Adobe co-founded the Content Authenticity Initiative in 2019 to help combat the threat of misinformation and help creators get credit for their work.

CAI is a coalition of nearly 2,000 members, including Leica Camera, AFP, the Associated Press, the BBC, Getty Images, Microsoft, Reuters, The Wall Street Journal and others.

The system works if everyone buys into it. At this point it’s easy to strip out the metadata by taking a screen grab, for example. But it’s a step in the right direction and that step is badly needed.



Adobe Max: From Flexible Textile Displays to Insane Video Editing – All of Adobe’s New AI Tech


Adobe MAX 2023 is one of those events that produced so much news, it will be analysed for months. Adobe promotes the event as the world's largest creativity conference, with new tools that make creating content easier. At the core of these tools is generative AI – Adobe Firefly.

We’ll look at the general announcements from the conference in another story, but for now, let’s look at what Adobe engineers are planning to roll out in the very near future. Some of these innovations are just extraordinary. I covered a couple of mind-blowing innovations during my weekly Sky News technology segment on Sunday and you can see that clip below.

Watch Djuro Sen explain Adobe MAX innovations on Sky News Weekend Edition

Keep reading for more detail about the 11 inventions that featured in the “Sneaks” showcase.

ADOBE SNEAKS – 2023 PROJECTS

Project Primrose – 3D & Design

Project Primrose, Adobe says, blurs the line between technology and fashion. In the live demo on stage the crowd was blown away as the dress changed patterns in real time. Flexible textile displays have enormous potential. A ‘live’ creative canvas that can be worn opens up a massive range of possibilities.

The interactive dress on show was displaying content created with Adobe Firefly, After Effects, Stock, and Illustrator.

Project Stardust – Photos

Project Stardust is an object-aware editor that moves or removes objects just by clicking on them. Users can easily select, edit and delete complex elements in any image – enabling them to select persons in a photograph, move them to a different place in the composition and fill in the background where they were previously standing.

Watch Project Stardust demo

Users can also change elements like the colour of a person’s clothing and the position in which they’re standing – treating any image like a file with layers.

Project See Through – Photos

Project See Through is an AI-powered tool that makes it simple to remove reflections from photos. Glass reflections can absolutely ruin your images so it’s best to use a polarising filter when you take the shot. But most people just use a phone so this application is a lifesaver!

Adobe makes reflection removal a snap

Removing reflections using existing software is possible but it can lead to poor results. From what we saw on stage, See Through does a great job and it’s simple.

Project Fast Fill – Video

Project Fast Fill is incredible. By simply typing a prompt in the editor you can change the selected area. In the example below the program created a tie and mapped it to the person’s shirt. This isn’t a big deal for still images, but for video it’s amazing. This is Adobe starting to flex its AI muscles and it’s going to change everything.

Project Fast Fill offers an early look at what human-prompted generative AI could enable inside Adobe video editing tools including Premiere Pro and After Effects.

Project Scene Change – Video

Project Scene Change makes it easy for video editors to composite a subject and scene, from two separate videos captured with different camera trajectories, into a scene with synchronised camera motion.

Point cloud data is used to make a composition of separate videos

Think about it. You can be anywhere you want and it doesn’t matter if you get the motion or angles wrong.

Project Res Up – Video

Project Res Up is a tool that easily converts video from low to high resolution using innovative diffusion-based upsampling technology.

There are many programs that ‘up res’ from lower source video but the results are all over the place. Again, this appears to simplify the whole process with promising results.

Project Dub Dub Dub – Audio

Project Dub Dub Dub is one of those programs that really appears to be magic. It automates the video dubbing process from one language to another. This is usually expensive and time-consuming. But Adobe’s shown it can be done at the push of a button.

Thanks to Project Dub Dub Dub’s AI capabilities, a recording or audio track of a video can be automatically translated to all supported languages while preserving the voice of the original speaker, temporally aligned with the original dialogue and ready to publish.

Project Poseable – 3D & Design

Project Poseable represents a breakthrough in AI-based image generation models, making it easy for the image generator to seamlessly interact with large 3D objects, including poses from photos of real people.

The prototype can create a 3D rendering from text input, take depth and perspective into consideration, and re-align the object. While creating 3D scenes with traditional skeletal systems is technically complex and time-consuming, Project Poseable offers a painless and simple alternative.

Project Neo – 3D & Design

Project Neo enables creators to incorporate 3D shapes in their 2D designs – all without requiring technical expertise in 3D creation tools.

Today, creators of 2D designs such as infographics, posters, or logos are often limited in their ability to incorporate 3D elements given the complexity of those workflows, which can require years of experience.

Project Neo enables creators to embrace simplified 3D design within 2D tools and methods.

Project Glyph Ease – 3D & Design

Project Glyph Ease makes customised lettering more accessible by streamlining the tedious design process of glyphs – the specific design and shape elements of each letter character.

Beginning with a hand-drawn letter shape, Project Glyph Ease uses AI to automatically generate an entire set of glyphs matching the input lettering style. Users can then easily edit the generated glyphs in Illustrator.

Project Draw & Delight – 3D & Design

Project Draw & Delight can transform a pathetic, simple sketch into a professional drawing worthy of publication. Using generative AI tools, Project Draw & Delight can make anything on paper look good.

Once you’ve improved the scribble using AI, you can experiment with colour palettes, style variations and different backgrounds.

Image Matrix Tech will continue to dive into the Adobe MAX pot of gold over the coming weeks, breaking down the updates and prototypes that will make the life of creatives much easier.



Image Matrix Tech on Sky News: Dyson 360 Vis Nav


In Sunday’s segment on Sky News Australia (Weekend Edition with Tim Gilbert) we took a look at Dyson’s new robot vacuum cleaner, Dyson 360 Vis Nav.

For more than 20 years Dyson has been working towards the ultimate robot vacuum cleaner, so have they done it? The first thing you notice is the shape. The D shape means it can clean corners and edges properly. It amazes me that most robot vacs are round, because they just don’t do corners or edges well.

The Dyson 360 Vis Nav is the most powerful robot vacuum on the market by a long way – probably the most powerful ever made – so in that area it blows the competition away. Dyson says it’s six times more powerful than the next best. It makes sense when you look at what’s inside.

Surprisingly, Australia and Japan are the first markets to get access to the 360 Vis Nav. It costs A$2399.

PHOTOSHOP JUST GOT A MAJOR UPGRADE

Adobe has integrated Firefly directly into Photoshop

This is massive news from Adobe. Firefly – Adobe’s AI assistant – is now available in Photoshop. The key feature: Generative Fill. 

The new Firefly-powered Generative Fill is the world’s first co-pilot in creative and design workflows, giving users a new way to work by easily adding, extending or removing content from images non-destructively in seconds using simple text prompts. So you just type what you want and Photoshop makes it happen. Firefly’s first model is trained on Adobe Stock images, openly licensed content and other public domain content without copyright restrictions.

Generative Fill is available by downloading the Photoshop Beta, and you must have an internet connection to use it.

NEW DISPLAY TECHNOLOGY IS OUT OF THIS WORLD

Samsung Display showed off a new sensor OLED display that can recognise fingerprints anywhere on the screen

Display Week in LA is all about screens and here’s a few futuristic ones from Samsung.

  • Samsung Display unveiled a new Sensor OLED Display that can recognise fingerprints ANYWHERE on the screen and even check cardiovascular health. Sensor OLED Display can measure the user’s heart rate, blood pressure and stress level simply with the touch of two fingers. This is sci-fi stuff.
  • Flex In & Out, a new foldable phone concept that can be folded both inward and outward 360 degrees. The ‘in-folding’ form factor, which can only be folded inward, requires a separate external panel to view information while folded, but the Flex In & Out is able to overcome that, opening up the possibility of lighter and thinner foldable phones.
  • Rollable Flex expands more than five times, from 49 mm to 254.4 mm in length. While conventional foldable or slidable form factors offer up to three times the scalability, Rollable Flex overcomes such limitations by enabling the display to be rolled and unrolled on an O-shaped axis like a scroll. Think about it: you could just roll up your TV and carry it under your arm.