What will the next generation of CC Video apps bring?

Last week at NAB 2014 we revealed what you can expect from the next generation of Adobe Premiere Pro CC, After Effects CC, Prelude CC, Audition CC, SpeedGrade CC, Story CC Plus, Adobe Media Encoder CC and Adobe Anywhere.

Even for me, many of the new features are quite exciting. While I am not a video pro, I do use Premiere Pro, After Effects, Audition, and SpeedGrade to enhance the video tutorials I produce, and I am certain that many of the new features will make me more productive.

If you want to see the demos, jump here (for the videos) and here (for the complete list of new features).

Instead of simply listing some of the new features here, I’ll just say this: the next version is all about making you more productive when working with video and sound. With video and sound you always work with a number of different apps (some for editing, some for video effects or color grading, others for sound editing, and so forth). Thus, in order to be productive, it is not enough to have great software for each step in the editing process; these individual pieces must also “talk” to each other to smooth out the whole process.

And this is the essence of the most important updates: there will be better and tighter integration between Adobe Premiere Pro CC and friends.

I still remember the first time I edited a video tutorial in Premiere Pro. It was about 5 years ago. My current workflow couldn’t be more different, both in terms of the end result and how easily I can achieve the vision I have in my head :).

Flash Builder’s Lost Features: Profiler

Here is the second episode of the Flash Builder’s Lost Features show. This time I chose to talk about Flash Builder’s profiler and to give you enough info to feel comfortable using it if it’s new to you. The profiler helps you locate memory leaks, identify excessive object allocation, and analyze execution times.
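To make this concrete, here is the classic kind of leak a profiler surfaces: a listener registry that only grows because nothing ever unregisters. The sketch below is in Java rather than ActionScript, and the EventBus class is purely hypothetical, meant only to illustrate the pattern.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical event bus that holds strong references to its listeners.
// If callers register but never unregister, the listener list grows without
// bound: exactly the kind of excessive allocation a profiler surfaces.
class EventBus {
    private final List<Runnable> listeners = new ArrayList<>();

    void register(Runnable listener) { listeners.add(listener); }
    void unregister(Runnable listener) { listeners.remove(listener); }
    int listenerCount() { return listeners.size(); }
}

public class LeakDemo {
    public static void main(String[] args) {
        EventBus bus = new EventBus();
        // A view is created and "destroyed" 1000 times, but nobody calls
        // unregister(): every destroyed view stays reachable through the bus.
        for (int i = 0; i < 1000; i++) {
            bus.register(() -> { /* update the view */ });
        }
        System.out.println(bus.listenerCount()); // prints 1000
    }
}
```

Run under a profiler, this loop shows up as a steadily growing population of live listener instances that can never be collected.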

With the extension of the Flash Platform to mobile devices, I think it’s more important than ever to build Flash applications that run efficiently for a long time. Here is the video (you can watch it in a higher resolution here).

If you want to read more about the “art” of profiling, please take the time to read the official documentation (here and here) and Ilan Avigdor’s article.

AIR apps for viewing Android pictures on desktops

This week I had time to play with another idea for Android/desktop applications: a picture viewer. My friend Alex Chiculita from the AIR team gave me this idea. A couple of weeks ago he played with a multi-screen application that let you load a picture on one device and send it to all the other devices connected to the same Wi-Fi network (the app runs on Android, Windows, Mac OS, and Linux).

While playing with his application I realized that I could transform it into something more helpful (at least for me). Here is the challenge: we all use our smartphones for a lot of things, including taking pictures. Having a decent digital camera with you all the time (which is what a smartphone has lately become, on top of being a phone) means you can take interesting pictures. And usually you want to show these pictures to your friends or family. However, here is the problem: while taking pictures is extremely easy, sharing them involves cables, Bluetooth, or huddling around the phone’s screen.

My solution to this problem is AndroidPictures (the above pictures show AndroidPictures in action on my mobile). This Android application lets you browse through the pictures taken with the phone and scale/rotate/pan them. On the desktop, you use the companion AIR application for AndroidPictures, which displays the pictures sent by the Android application. All you have to do to see the pictures with your family is:

  • connect your Android phone to the Wi-Fi network;
  • start the AndroidPictures app on your Android phone, and start the PicturesViewer app on one or more of your computers;
  • what you see on your mobile phone will be replicated on all connected computers.

Watch the video below to see how it works.

The making of

I used Adobe AIR and Flex 4.1 to create the Android and desktop applications. In order to connect the Android application to the desktop apps, I used peer-to-peer direct routing (the same approach used in my previous app). As I explained before, if your local subnet (for example, your home Wi-Fi) supports broadcasting, then you can create a NetConnection without using Stratus or a Flash Media Server (you connect the NetConnection to “rtmfp:”). This is one of the new features available in Flash Player 10.1 and Adobe AIR 2.

Once you have the clients connected, you can send messages from any one of them to all the others. And the cool thing about this approach, as opposed to socket servers, is that you don’t have to manually manage all the clients. Your program sends a message and the clients decide how to handle it. You simply don’t care how many clients are connected. You can read more on my fellow evangelist Tom’s blog.

Because I wanted to use this approach for other apps, I created a simple library (you can get the library’s source code from here; unzip the archive and import the PeerToPeer.fxpl project into Flash Builder). You’ll find three ActionScript classes, and the magic happens inside the MultiCastingService class. The public interface of this service is:

  • isReady
  • neighborCount
  • userName
  • connect()
  • disconnect()
  • post()

The service throws these events:

  • ServiceEvent.CONNECTED
  • ServiceEvent.DISCONNECTED
  • ServiceEvent.PEER_CONNECT
  • ServiceEvent.PEER_DISCONNECT
  • ServiceEvent.RESULT

The simplest way to use this service would be:

    var service:MultiCastingService = new MultiCastingService();
    service.addEventListener(ServiceEvent.RESULT, onResult);
    service.addEventListener(ServiceEvent.CONNECTED, onStatusChange);
    service.addEventListener(ServiceEvent.PEER_CONNECT, onStatusChange);

    service.connect();

    private function onResult(e:ServiceEvent):void {
        if (e.what == "picture") {
            //do something with the bytes: e.body
        }
    }

    private function onStatusChange(e:ServiceEvent):void {
        if (e.type == ServiceEvent.PEER_CONNECT) {
            if (service.neighborCount > 0) {
                //others are connected; send a String message
                service.post("this is my message");
            }
        }
    }

For the Android app I had to tweak the Spark List in order to make it work with both touch and click events. For the picture interaction I used a library created by Tim Kukulski, a member of the Adobe XD team. This library makes it easy to interact with pictures through gestures like zoom, pan, and rotate.

The desktop application waits for and responds to two kinds of messages: picture bytes and picture transformations (rotation, zooming, or panning). Every time a picture is selected in the Android app, I grab its bytes and send them over the “wire”. When I transform a picture in the Android app, I grab the Matrix and send it to all the connected clients. Each client applies the Matrix to its copy of the picture. And the rest is history :)
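The apps themselves serialize an ActionScript Matrix, but the idea can be sketched in Java. Below is a hypothetical pipe-delimited encoding of the six affine components (a, b, c, d, tx, ty), in the same spirit as the pipe-delimited viewport commands I use elsewhere; the TransformMessage class and its format are my invention for illustration, not the actual wire format.

```java
import java.util.Locale;

// Illustrative only: encodes a 2D affine transform as "a|b|c|d|tx|ty",
// so one peer can broadcast a gesture and every client can reapply it.
public class TransformMessage {
    public final double a, b, c, d, tx, ty;

    public TransformMessage(double a, double b, double c, double d, double tx, double ty) {
        this.a = a; this.b = b; this.c = c; this.d = d; this.tx = tx; this.ty = ty;
    }

    // Encode the six matrix components into a single message string.
    public String encode() {
        return String.format(Locale.ROOT, "%f|%f|%f|%f|%f|%f", a, b, c, d, tx, ty);
    }

    // Parse a message back into a transform on the receiving side.
    public static TransformMessage decode(String msg) {
        String[] t = msg.split("\\|");
        return new TransformMessage(
            Double.parseDouble(t[0]), Double.parseDouble(t[1]),
            Double.parseDouble(t[2]), Double.parseDouble(t[3]),
            Double.parseDouble(t[4]), Double.parseDouble(t[5]));
    }
}
```

A client receiving such a message would simply decode it and apply the resulting transform to its local copy of the picture, which is why the sender never needs to know how many clients are listening.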

All in all, it was pretty easy to put these apps together and I had a lot of fun doing it. If I have the time, I will try to extend the current code to play the movies recorded with my Android phone.

Getting the apps and source code

You can download the source code from here, install the desktop application from here, and the Android application from here. If you want to run the Android application, you need to install Adobe AIR on your Android phone (more info here).

If you have ideas for more applications that take advantage of having AIR running on Android phones and desktops, please let me know. If you create something interesting, I’d love to hear about it. I already have another cool idea, this time more complex and even more fun!

Have fun with the Flash Platform on multiple screens!

Creating multi-screen apps for Android and desktop using AIR

Today I finished a project I’ve been working on since last week: a desktop MP3 music player that can be controlled by any number of Android phones. I built these apps using Adobe AIR and the Flex framework. Below you can watch a video of these apps in action, running on a Motorola Droid, a Nexus One, and my laptop (you can watch the video in a higher resolution here).

The communication between the remote controls (AIR apps running on Android phones) and the desktop player is done using the peer-to-peer features of AIR 2 and Flash Player 10.1. Basically, if all the parties are connected to the same subnet and the network allows broadcasting, then you can create a group and send messages to all the members without needing Stratus or some other service/server.

Actually, while working on this project I created a small class that enables you to quickly create clients that connect over a local network. Of course, this is only one way of connecting two or more clients. You can use sockets if you want, or one-to-one (peer-to-peer) communication. But I think in both of these cases you have to do more work, because you have to manually manage all the parties involved. If you want to find out more about the peer-to-peer features of the Flash Platform, take a look at this MAX session and read my fellow evangelist Tom Krcha’s blog.

The Android app was more fun to build because I used touch input alongside click input. I really enjoyed tweaking James Ward’s code for scrolling a Flex List. Believe it or not, I again used Illustrator and Flash Catalyst a lot to create the skins, or parts of them.

Until I have the time to put together an article explaining how these apps were created, please enjoy the video and play with the apps: desktop file and APK file. From here you can download an archive with the source code. If you need the Adobe AIR runtime or AIR SDK for Android, please sign up for the prerelease group here.

What do you think?

Secret agencies catching up on Flash Builder 4

After we launched Flash Builder 4, I received a number of strange calls. Luckily for me, I had my camera nearby and was able to record them. I edited them a little bit; you don’t want to mess with the secret agencies :D. Here’s the video:

Here is the catch: if you recognize who’s impersonating the American, English, and French agents, you get a Flash Builder 4 license (please don’t ask me about the Russian guy, I have a family :D). There are three guys to recognize and I’ll give away three licenses; each correct guess earns you a license, so even one right answer gets you one. Hurry up, there are only three licenses. Just drop a comment (make sure you fill in your e-mail address)! Good luck!

LATER UPDATE: There’s one more license for whoever identifies the person playing the American. I’ll give you a hint: he’s an American and he is part of the Flash Platform team.

LAST UPDATE: I stopped the contest. The American is Bill Heil, platform product manager for Flash Builder; he oversees the server services area.

I know this one wasn’t easy at all. But then, I never said it would be :D. Next time you should know better!

I did the raffle and the winner of the last license is Mr. Benz. Congratulations!

Thanks to everyone for taking part and I hope you had a great time!

Why Flash: Interview with Mindomo.com

Last week I had the pleasure of interviewing Zoltán Lörincz, the guy behind Mindomo.com. Mindomo.com is one of the coolest mind mapping apps out there and it happens to be a Flex application.

I met Zoltán in 2007, when we talked about Flex Builder 2 and what they wanted to see in Flex Builder 3. At that time Mindomo.com was already on the market, and we knew we could get a lot of valuable feedback about Flex and Flex Builder from someone who had created such a complex app. Since then, many new features have been added, and they created a desktop version using AIR. Another interesting part of their application is the toolbar itself: they implemented probably the only Microsoft Office-style Ribbon in Flex out there.

In this interview you can find a short story about the birth of Mindomo.com, and you can learn about some of the key features of this application.

I apologize for the image/sound quality. I did this interview remotely, and the sound was recorded over the phone.

Enjoy!

Magnifying Glass AIR 2 application or how to communicate with a Java program from AIR

My favorite feature in Adobe AIR 2 is, by far, native processes: the ability to launch, control, and communicate with a native process. It can be any executable on the machine where the AIR application is installed. I think this feature opens up a whole new range of AIR applications. When you add it to the ability to create socket servers, you have a powerful platform for building RIA applications for desktops.

Once I heard that this feature would make it into AIR 2, I was very excited. Why? Well, back in 2008 when we launched AIR 1.0, my fellow evangelist Serge Jespers created one of the coolest AIR applications for the AIR Tour: the smallest video player in the world. Basically, it lets you watch videos in the application icon in the Dock.

The application is extremely cool, but it has a small issue: it is too damn small to see what’s going on. Being an engineer, I spent some time trying to find an engineering solution. Of course, I could have asked Serge to rewrite the application to make it bigger, but that wouldn’t have been an engineering solution; it would have been something an accountant or a manager would come up with. My solution was to build a second AIR application that magnifies the video played inside the icon, acting like a digital magnifying glass.

With AIR 2 I was finally able to implement the magnifying glass app pretty easily. Below you can see a screenshot of my application in action. It has two windows. The first window is the viewport of the magnifying glass: you can see how many frames per second it processes, you can control the amount of zoom, and you can drag it around your screen. The second window displays the magnified image.

mg_1

The internals

How did I do it? The application has two main parts. One part is the AIR application itself: it renders the UI, controls the viewport and the zoom factor, and scales the image. The second part is a Java program that captures a screenshot of a portion of the screen. The Java program is controlled by the AIR application.

Using the NativeProcess and NativeProcessStartupInfo classes from AIR 2, you can launch an executable. In order to communicate with the executable, you use its standard input and standard output. I wrote the Java program to output the bytes of the screenshot to standard output. It listens on standard input for commands, such as take a shot, set the viewport, or terminate the program. I compiled the Java program as an executable JAR file and placed it in the AIR application’s root folder.

In order to capture the output of the Java program, all you have to do is register a listener on the NativeProcess instance for the standard output events. When you want to send commands, you write bytes to the standardInput property of the same object. Here is a snippet of code; for the complete code, have a look at the ScreenShotService class in the AIR application.

    private var nativeProcess:NativeProcess;
    private var npInfo:NativeProcessStartupInfo;

    //setting the arguments for starting the Java program
    var arg:Vector.<String> = new Vector.<String>();
    arg.push("-jar");
    arg.push(File.applicationDirectory.resolvePath("screenshot.jar").nativePath);
    arg.push("130");
    arg.push("100");

    npInfo = new NativeProcessStartupInfo();
    //setting the path to the native process
    npInfo.executable = new File("/Library/Java/Home/bin/java");
    npInfo.arguments = arg;

    nativeProcess = new NativeProcess();
    nativeProcess.addEventListener(ProgressEvent.STANDARD_OUTPUT_DATA, onStandardOutputData);
    //start the process
    nativeProcess.start(npInfo);

    /**
     * Read the data from the standard output.
     * Before reading a PNG, first you have to read the length of the image.
     */
    private function onStandardOutputData(e:ProgressEvent):void {
        //reading the available bytes from the standard output buffer of the process
        nativeProcess.standardOutput.readBytes(_processBuffer, _processBuffer.length, nativeProcess.standardOutput.bytesAvailable);
        ...
    }

    //sending the take command to the Java process
    nativeProcess.standardInput.writeMultiByte("take\n", "utf-8");

This is the relevant Java code (you can find the complete code inside the source folder of the application, in ScreenShot.java):

    /**
     * @param args width and height of the screen capture
     */
    public static void main(String[] args) {
        if (args.length == 2) {
            width = Integer.parseInt(args[0]);
            height = Integer.parseInt(args[1]);
        }

        ScreenShot s = new ScreenShot();
        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        String text = "";
        String[] tokens;

        while (true) {
            try {
                text = in.readLine();
                if (text == null || text.equals("terminate")) {
                    return;
                } else if (text.equals("take")) {
                    s.capturePortion(x, y, width, height);
                } else if (text.length() > 0) {
                    tokens = text.split("\\|");
                    if (tokens.length < 4)
                        continue;
                    x = Integer.parseInt(tokens[0]);
                    y = Integer.parseInt(tokens[1]);
                    width = Integer.parseInt(tokens[2]);
                    height = Integer.parseInt(tokens[3]);
                }
            } catch (IOException e) {
                System.err.println("Exception while reading the input. " + e);
            }
        }
    }

    /**
     * Capture a portion of the screen.
     */
    public void capturePortion(int x, int y, int w, int h) {
        try {
            if (robot == null)
                robot = new Robot();
            BufferedImage img = robot.createScreenCapture(new Rectangle(x, y, w, h));
            ByteArrayOutputStream output = new ByteArrayOutputStream();
            ImageIO.write(img, imageType, output);

            DataOutputStream dataOutputStream = new DataOutputStream(System.out);
            //output the buffer size
            dataOutputStream.writeInt(output.size());
            //output the buffer
            dataOutputStream.write(output.toByteArray());
            dataOutputStream.flush();

            output.close();
        } catch (AWTException e) {
            System.err.println("Exception while capturing screen. " + e);
        } catch (IOException e) {
            System.err.println("Exception while writing the image bytes. " + e);
        }
    }
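The receiving side has to honor the same framing: first the 4-byte big-endian length written by writeInt(), then exactly that many payload bytes. Here is a small Java sketch of such a frame reader; the FrameReader class is illustrative (the real AIR client does the equivalent in ActionScript on top of the standardOutput buffer).

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class FrameReader {
    // Read one length-prefixed frame: a 4-byte big-endian length (as written
    // by DataOutputStream.writeInt) followed by that many payload bytes.
    public static byte[] readFrame(InputStream in) throws IOException {
        DataInputStream data = new DataInputStream(in);
        int length = data.readInt();   // matches writeInt(output.size())
        byte[] payload = new byte[length];
        data.readFully(payload);       // blocks until the whole frame arrived
        return payload;
    }

    public static void main(String[] args) throws IOException {
        // Simulate the producer side: length prefix, then the payload bytes.
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buffer);
        byte[] fakePng = {(byte) 0x89, 'P', 'N', 'G'};
        out.writeInt(fakePng.length);
        out.write(fakePng);

        byte[] frame = readFrame(new ByteArrayInputStream(buffer.toByteArray()));
        System.out.println(frame.length); // prints 4
    }
}
```

This is also why the ActionScript listener above buffers incoming bytes: a single STANDARD_OUTPUT_DATA event may deliver only part of a frame, so the client must accumulate bytes until the announced length is available.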

I am by no means a designer. Still, I think I managed to get a decent look for the main application window using Adobe Illustrator and Flash Catalyst. I created the design in Illustrator, then used Flash Catalyst to transform the graphics into a Flex application, and finally added the logic with Flash Builder 4.

Source code and native installers

You can download the Flex project from here, Mac installer from here, and Windows executable from here. This program requires Java 5 or newer and the Adobe AIR 2 runtime.

Things to know when working with Native Processes in AIR

In order to enable this feature you need to add the extendedDesktop profile to the application descriptor file. Add this tag as a child of the application tag:

    <supportedProfiles>extendedDesktop</supportedProfiles>

When using this feature you can’t package your application as an AIR file for distribution; you have to use a native installer. The easiest way to do this is to export a release build from Flash Builder (you get the AIR file you would normally use to distribute your application), and then use adt at the command line to create the native installer. If you want a Mac installer you do it on a Mac; if you want a Windows installer you have to do it on Windows. The command looks like this:

    adt -package -target native myApp.exe myApp.air

You can read more about creating native installers for AIR applications here (make sure you use the adt from AIR 2 and not one from an older version).

If you see an error like the one in the picture below when you install an application using the generated native installer, create a file named .airappinstall.log in your home folder. This log file can tell you what went wrong. In my case the error was “failed while validating native package: Error: Missing digested package file: .DS_Store starting cleanup of temporary files” (I fixed the problem by deleting the .DS_Store file from the source folder).

mg_2

Finally, you can check at runtime if the application has extended desktop capabilities by using this:

    if (NativeProcess.isSupported) {
        //extended desktop profile is available
    } else {
        //extended desktop profile is not available
    }

What’s next?

If you haven’t already, download the Adobe AIR 2 runtime and SDK and play with the new features. You can find a nice article about the new features in AIR 2 on Christian Cantrell’s blog.

I already have another idea: what about an AIR application that does screen sharing? Keep an eye out; I might be able to pull it off!

PS. Many thanks to my friends Chicu and Raul from the Romanian AIR team for their help.

Later Update: My friend Benjamin Dobler created a nice screen-recording application with AIR 2 (it captures sound as well). Although the source code is not available for now, I still think it is worth a look.