I don’t have any experience with AirDisplay, but I’ve found that screensharing for video output is generally choppy and laggy at best (and at worst won’t work at all, and will just render black). AirDisplay and other screensharing technologies take advantage of the fact that in typical desktop use, only a very small portion of the screen is changing at any time (e.g. the small rectangle around new characters as you type), so there are really very few pixels to send for each refresh. That’s not the case with video playback, where the entire frame changes on every refresh, and encoding and transmitting that entire frame of video is orders of magnitude more work. I’m afraid you’re not likely to get good performance that way. I certainly don’t want to discourage you from experimenting, but I wouldn’t expect the results to be smooth, particularly with multiple iPads connected to one computer. If you can tolerate choppy rendering and potentially high latency in your use case, you might still be able to do some neat stuff with it.

I have not tried the Hap codec, since I’m running in 64-bit mode. Could anyone please help me find my “bottleneck”? (Again: videos played from QLab to all three projectors look fine.)

This section of the manual will take you through all the steps necessary to fully configure video output, and it will also help guide you through the process. In QLab Preferences > Audio, check that your audio output is patched to your external interface, and patch the channels of the audio cue to the correct outputs in the patch.

To control your ATEM from QLab, simply set it up as a network device: use 127.0.0.1 (or localhost) as the destination, and 3333 (or whatever you chose as the incoming port in atemOSC) as the port. Then add some network cues in QLab targeting your newly defined networked ATEM; the name can be whatever label you want to use within QLab.
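You can sanity-check that setup outside QLab, too. Here is a minimal Python sketch that hand-encodes an OSC message and sends it over UDP to atemOSC on localhost:3333, as configured above. The `/atem/program/1` address is only an illustrative assumption — check the documentation for your atemOSC version for its actual OSC address space.

```python
import socket
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC 1.0 spec."""
    data += b"\x00"
    return data + b"\x00" * (-len(data) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode a minimal OSC message (int32 and float32 arguments only)."""
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)  # big-endian float32
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)  # big-endian int32
        else:
            raise TypeError("only int/float supported in this sketch")
    return osc_pad(address.encode()) + osc_pad(tags.encode()) + payload

# NOTE: "/atem/program/1" is a plausible-looking address used for
# illustration; verify it against your atemOSC version's docs.
msg = osc_message("/atem/program/1", 1.0)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 3333))  # the port chosen in atemOSC
sock.close()
```

Because OSC over UDP is fire-and-forget, the send succeeds even if atemOSC isn’t running yet, so this is safe to try before the show machine is fully set up.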
The video output system of QLab 5 supports a wide variety of workflows and setups, from a single TV built into a set, to multi-projector blends mapped onto complex scenery, to LED walls, to broadcast feeds. Getting started with surfaces: when you create a new workspace, QLab will automatically add a surface for each attached display, with that display assigned to the surface (that specific output becomes a specific QLab output). To get to the Video Surface Editor, go to Workspace Settings, choose Video from the list on the left, then click the Edit button next to one of the video surfaces.

We’re using three projectors mapped to three walls on stage. I send my videos from QLab using three Syphon sources into Millumin to be able to warp the video better. We’ve got a Mac Pro (early 2008, since Apple can’t get me their new one yet) with an SSD drive, 14 GB RAM, and two video cards with four outputs in total, including the local monitor. When I play my videos directly from QLab to the projectors, the videos look fine, so I think the computer is OK. But since I need more warping than QLab surfaces can give me, I use Syphon into Millumin. We’ve managed to map three Syphon sources nicely using still pictures; we’ve done quite a lot of warping and have also resized and rotated two of the projectors 90° to get better coverage, since they are hanging on their side. Now the videos are “jumpy” through Millumin. It makes no difference if we use a Syphon source or if we put them directly in the dashboard in Millumin. I’ve tried different codecs; some are a little better, but none are as smooth as any of them running in QLab.

QLab still does all its rendering in OpenGL, which is processed by the GPU. QLab also has an option in the Tools menu to “Black out desktop” and “Restore desktop”; I think this might be what you are looking for. Video Projection Tool (VPT) is a free multipurpose realtime projection software tool for Mac and Windows; VPT 7 was downloaded over 100,000 times, so in spite of MadMapper it remains widely used.
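When hunting for the kind of bottleneck described above, a quick back-of-the-envelope estimate of raw pixel bandwidth can help: every frame that crosses Syphon into Millumin is uncompressed texture data, so three streams add up fast on a 2008-era bus. The resolutions and frame rate below are illustrative assumptions, not measurements from the setup described.

```python
def raw_video_bandwidth(width: int, height: int, fps: int,
                        bytes_per_pixel: int = 4, streams: int = 1) -> int:
    """Uncompressed bytes per second that `streams` video feeds must move."""
    return width * height * bytes_per_pixel * fps * streams

# Three 1920x1080 feeds at 30 fps, 32-bit RGBA (assumed figures):
total = raw_video_bandwidth(1920, 1080, 30, streams=3)
print(f"{total / 1e9:.2f} GB/s")  # -> 0.75 GB/s
```

Roughly three quarters of a gigabyte per second before any codec work happens at all — which is why a codec that decodes on the GPU (such as Hap) or lower-resolution sources can make the difference between smooth and “jumpy” playback.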