Free Tbc Archives Page 2 Of 2 Hacked For Mac

I'm not entirely sure yet. I believe I can see some posterizing in the Matrox image (e.g. in the wood). I followed your settings. There's also an 8-12 bit setting that I left at 8. Colors and gamma are great, though!

Martin When you want to compare before and after images, I'm a big fan of bringing them into Photoshop, pasting one on top of the other, then using ctl/cmd Z to toggle between them. This is the best way to see differences. Of course, you can always use a higher data rate if there is indeed a problem. I have no idea about the 8-12 bit setting. That must be on the Mac version of Matrox.

It's not on the PC version. PDR, I can't get the uncompressed '-vcodec v210' to work correctly with ffmpeg.

It results in a crippled file because of an error near the end of every file during processing: 'av_interleaved_write_frame(): Operation not permitted'. Should I add/force -pix_fmt and -vtag parameters? Any suggestions? For the time being, I've come up with -vcodec rawvideo -pix_fmt uyvy422 -vtag 2vuy, but that also results in the same 'av_interleaved_write_frame(): Operation not permitted' error. Any help very much welcomed. Thanks, Martin

You won't be able to use ffmpeg for Matrox; it's a proprietary MPEG2 format (although free).

Ffmpeg's MPEG2 encoder isn't that great, and cannot reach high enough bitrates. I mentioned this earlier, but if quality is the biggest concern, use uncompressed 10-bit 4:2:2, aka v210. Bitrate is about 1000 Mb/s, so roughly 5x the size of DNxHD 175, but it's lossless. On Windows, QT treats it as Y'CbCr, so it should treat it the same on a Mac.
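As a rough sanity check on those figures, here is a back-of-envelope calculation (assuming 1080p at 23.976 fps; v210 packs 6 pixels of 10-bit 4:2:2 into 16 bytes, and this ignores container overhead):

```python
# Rough bitrate estimate for uncompressed v210 at 1080p23.976.
# v210 packs 6 pixels into 16 bytes (128 bits), i.e. ~21.33 bits per pixel.
WIDTH, HEIGHT = 1920, 1080
FPS = 24000 / 1001  # 23.976...

bytes_per_frame = WIDTH * HEIGHT * 16 / 6
mbps = bytes_per_frame * FPS * 8 / 1e6
print(round(mbps))           # ~1061 Mb/s, close to the "1000 Mb/s" quoted
print(round(mbps / 175, 1))  # ~6.1x a DNxHD 175 stream, in the ballpark of the "5x" quoted
```

Treat these as ballpark numbers; the exact ratio depends on row padding in the wrapper.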

Colors are the same as the Ninja ProRes in QT: -vcodec v210. The beta version of FFMBC reports that DNxHD color is fixed for QuickTime, but I compiled that rc6 version and did some tests, and colors still differ from the ProRes in QuickTime, at least on Windows. Maybe it hasn't been implemented yet in that version, or there is a special switch I don't know about: http://code.google.com/p/ffmbc/.

I know some more now. The AviSynth conversion fails on approx. 70% of all GH2 Ninja ProRes files! The 'av_interleaved_write_frame(): Operation not permitted' error is generated by AviSynth.

This is killing me, because this was an important test shoot. The AVS file is an exact copy of the script on page one of this thread, and I have added the input file to my public Dropbox. I have no idea what's wrong, because all files were recorded in the same manner. All defective files have the same size of 4,294,940,708 bytes. I really, really, sincerely hope that there is a simple explanation for this. Martin

btw: searching for "not+permitted%22" gives a number of results, but they are above my station, technically speaking.

All defective files have the same size of 4,294,940,708 bytes. Do you mean the input (recorded) files or the output (processed) files? That's a bit suspicious; do the recorded files play ok? What file system are you using?

Mac

Are you doing this through virtualization? Or dedicated windows box? Maybe it has problems with large files?

What Dropbox file? I'm not really feeling like downloading a huge clip. What was your EXACT ffmpeg command line? Try adding -r 23.976; -vcodec v210 should be enough. Try 1 file instead of the batch, until you get things sorted out, e.g. ffmpeg -i input.avs -vcodec v210 -an -r 23.976 output.mov.

I'll do some more tests tomorrow and will supply you with more detailed information. PS: I have removed the file.

Do you mean the input (recorded) files or the output (processed) files? That's a bit suspicious; do the recorded files play ok? What file system are you using? Are you doing this through virtualization? Or a dedicated Windows box? Maybe it has problems with large files? What Dropbox file?

I'm not really feeling like downloading a huge clip. What was your EXACT ffmpeg command line? Try adding -r 23.976; -vcodec v210 should be enough. Try 1 file instead of the batch, until you get things sorted out, e.g. ffmpeg -i input.avs -vcodec v210 -an -r 23.976 output.mov.

I've found two remarkable things! 1) My HDMI output and interlaced footage is now 25fps instead of 23.976 since I've updated my GH2 firmware and set it to PAL. 2) Even with AssumeFPS and/or ScriptClip, every file longer than approx. 774-776 frames crashes with an error. Questions: what should I do to change the script to process 25fps?

Why does everything work fine up to 774-776 frames? Should I revert to shooting in 24p NTSC on the GH2? I need to salvage the footage that I have here, because it's a unique, one-time event registration.

Any help is welcome!! I'm suspecting a filesize issue; maybe your virtualization OS has a 4GB limitation like FAT32? That could be why it always crashes at that point and filesize. You would never use AssumeFPS that way; for PAL it would be AssumeFPS(25). If your footage was shot at 50i (camera set to interlaced 50i), then it's already 50i, correct? Then there's nothing to process. What are you trying to do or 'salvage'?
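For what it's worth, the suspicious size quoted earlier sits just a hair under the FAT32 per-file ceiling, which fits the 4GB-limit theory; a quick check:

```python
# FAT32 caps a single file at 2**32 - 1 bytes (just under 4 GiB).
defect_size = 4_294_940_708     # size reported for every failed file above
fat32_max = 2**32 - 1           # 4,294,967,295 bytes
print(defect_size <= fat32_max)       # True
print(fat32_max - defect_size)        # 26587 bytes of headroom left
```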

If it's true 50i content, you could deinterlace it to 25p. And if you don't know what it is, it might be a good idea to post a sample, one with motion, preferably panning.

I run this on a brand-new Win7 PC, dedicated to the job. The GH2 footage still has duplicate frames (HDMI cripple); therefore I need the AviSynth 'hack'. It seems to be the part before FillDrops, because commenting that out does not help.

Here is an example of a file that does NOT work - I tried everything, including your recent suggestions; it crashes at frame 775: (897 MB)

I'm suspecting a filesize issue; maybe your virtualization OS has a 4GB limitation like FAT32? That could be why it always crashes at that point and filesize. You would never use AssumeFPS that way; for PAL it would be AssumeFPS(25). If your footage was shot at 50i (camera set to interlaced 50i), then it's already 50i, correct? Then there's nothing to process.

What are you trying to do or 'salvage'? If it's true 50i content, you could deinterlace it to 25p. And if you don't know what it is, it might be a good idea to post a sample, one with motion, preferably panning.

Hi Martin, the original script works ok for me, no crashes, but it produces 2 glitches on that sample (frame out of order), one around frame 47, the other around frame 280. This is unrelated to the 'crashing' that you are getting; I am able to encode either script to completion with ffmpeg. This script works ok for me, without the glitches:

ffmpeg -i input.avs -vcodec v210 -an -r 23.976 output.mov

QTInput('026.MOV')
AssumeTFF
TFM
TDecimate(mode=7, rate=23.976)
AssumeFPS
FixBrokenChromaUpsampling
ConvertToYV12
FillDrops

1) Can you preview the script in VDub? Can you scrub to the point where it crashes for you (frame 775)? Or past that point? 2) What is the EXACT error message you are getting?

3) What version of QuickTime are you using? 4) What version of ffmpeg are you using? 5) What version of QTSource.dll are you using? 6) Are you using an external HDD that is FAT32 formatted? To rule out a file system issue, try encoding with a smaller bitrate like DNxHD 175, just for testing purposes. Because each file that crashes is a certain filesize, that is very suspicious of a write limitation.

First of all: PROBLEM SOLVED! What differs from my script is TDecimate instead of FDecimate.

I've switched off the Norton Antivirus (!?!) and all seems to be ok! Have not yet transferred to FCP, but this looks very promising! Weirdly enough, the AVCHD is now an interlaced 47.952 stream! I personally don't mind, but the latest firmware hack seems to do something peculiar to the AVCHD encoder.

I assume that the HDMI recorded file (026.mov) that I've provided is 23.976 in a 50i stream? Martin

Hi there, hopefully you lot can help me out. I own a Blackmagic HyperDeck Shuttle. My GH2 is a PAL version, and the .mov files recorded out of the HDMI port by the Shuttle are seen by QuickTime as 25 frames per second, bit rate 1.12 Gb. Now: I run the script as per Jorgen Escher's instructions and I get a '[dnxhd @ 002DDCA0] video parameters incompatible with DNxHD' error. How do I manage this?

The QuickTime dll and everything else seems to be in place. I tried to vary the original script's 'vcodec dnxhd -b 175M' with 120M and 185M, according to DNxHD specs relating to 50i or 25p, but no joy. Using Win 7 64-bit, btw.

Hi there, hopefully you lot can help me out. I own a Blackmagic HyperDeck Shuttle. My GH2 is a PAL version, and the .mov files recorded out of the HDMI port by the Shuttle are seen by QuickTime as 25 frames per second, bit rate 1.12 Gb.

Now: I run the script as per Jorgen Escher's instructions and I get a '[dnxhd @ 002DDCA0] video parameters incompatible with DNxHD' error. How do I manage this? The QuickTime dll and everything else seems to be in place.

I tried to vary the original script's 'vcodec dnxhd -b 175M' with 120M and 185M, according to DNxHD specs relating to 50i or 25p, but no joy. Using Win 7 64-bit, btw.

Type Info() at the end of the script; what frame rate does AviSynth report when you preview the script? And what are the reported dimensions of the clip? (Open the .avs in VDub; it should tell you in the left-hand corner.) Also, some ffmpeg builds express bitrate in bits: -b 175000000 would correspond to 1920x1080p24; others would use -b 175M.

Hi there PDR, thanks for helping. Where should I type this 'Info()' line?

And just add it like that, no hyphen or anything else in the brackets?

At the end of the script, after ConvertToYUY2. Check it before you run the batch file (open the .avs in VDub). You want to narrow down where the problem is occurring. Also post all the information that ffmpeg gives you in the command-line box. Isn't the PAL version 24p embedded in a 50i stream? Or do you care about audio recorded separately (it will be out of sync if you use AssumeFPS(25))? The script should return 24p, similar to what was recorded in-camera in AVCHD.

AFAIK there is no 25p for the GH2, unless the firmware hack enabled it? Also, that isn't a valid number (PAL is 25.0 fps, not 24.97); the number would be 23.976 fps for NTSC regions.

Ok PDR, done my homework. Basically, the previous error was generated because the script could not find some of the files during the LoadPlugin function.

The error is quite misleading, I must say. I corrected the file locations and the script ran after I corrected it with your indications.

1. Processall.bat

for %%a in (*.avs) do ffmpeg\bin\ffmpeg -i %%a -vcodec dnxhd -b 175M -pix_fmt yuv422p -an %%~na.mov
pause

2. Gh2chromafix

LoadPlugin('C:\GH2AviSynth\plugins\QTSource.dll')
LoadPlugin('C:\GH2AviSynth\plugins\TIVTC\TIVTC.dll')
LoadPlugin('C:\GH2AviSynth\plugins\FDecimate\FDecimate.dll')
QTInput('C:\GH2AviSynth\PROCESSING\Capture0002.mov')
AssumeTFF
TFM(mchroma=false, pp=5)
FDecimate(threshold=0.5)
DelayAudio(.07)
AssumeFPS
# Chroma Fix - Optional, but highly recommended
FixBrokenChromaUpsampling
# Separates the YUV channels. YV24 seems necessary for the split and remerge to work.

ConvertToYV24(chromaresample='spline36')
Y = Tweak(sat=0, coring=false)
U = UToY
V = VToY
# Shifts the YUV channels individually. In this case subpixel (half-pixel) shifting.
U = U.Spline36Resize(u.width, u.height, 0.5, -0.5)
V = V.Spline36Resize(v.width, v.height, 0, -0.5)
Y = Y.Spline36Resize(y.width, y.height, 0, 0.5)
# Manually squeezes the chroma down to 4:2:0 and back. Prevents further shifting.
U = U.Spline36Resize(u.width, u.height/2)
V = V.Spline36Resize(v.width, v.height/2)
U = U.Spline36Resize(u.width, u.height*2)
V = V.Spline36Resize(v.width, v.height*2)
# Recombines the individual YUV channels.
YToUV(U, V, Y)
# Depending on the source file you may need to convert from YV24 back to YUY2/YV12.

ConvertToYUY2

The .mov files I got could not play in QuickTime. I get 'CAVIStreamSynth: System exception - Access Violation at 0x1a5843e, reading from 0xdc047482'. Running the .avs script in VirtualDub gives me the same message! The .mov files I got could not play in QuickTime.

I get 'CAVIStreamSynth: System exception - Access Violation at 0x1a5843e, reading from 0xdc047482'. Running the .avs script in VirtualDub gives me the same message!

Can you open the original MOV in QuickTime Player? Do you have QuickTime installed? Are you recording to MOV from the Hyperdeck Shuttle? To debug, can you try opening a simple script: QTInput('video.mov')? Make sure you have no weird spaces; in the post above you have some gaps. (It's probably the message board's doing; you should use code tags to embed in HTML.)

QTSource version is the latest. Tried FFMpegSource2, modifying the script this way:

LoadPlugin('C:\GH2AviSynth\plugins\QTSource.dll')
LoadPlugin('C:\GH2AviSynth\plugins\TIVTC\TIVTC.dll')
LoadPlugin('C:\GH2AviSynth\plugins\FDecimate\FDecimate.dll')
LoadPlugin('C:\GH2AviSynth\plugins\FDecimate\ffms2.dll')
FFVideoSource('C:\GH2AviSynth\PROCESSING\Capture0002.mov')
AssumeTFF
TFM(mchroma=false, pp=5)
FDecimate(threshold=0.5)
DelayAudio(.07)
AssumeFPS
# Chroma Fix - Optional, but highly recommended
FixBrokenChromaUpsampling
# Separates the YUV channels.

# YV24 seems necessary for the split and remerge to work.
ConvertToYV24(chromaresample='spline36')
Y = Tweak(sat=0, coring=false)
U = UToY
V = VToY
# Shifts the YUV channels individually. In this case subpixel (half-pixel) shifting.
U = U.Spline36Resize(u.width, u.height, 0.5, -0.5)
V = V.Spline36Resize(v.width, v.height, 0, -0.5)
Y = Y.Spline36Resize(y.width, y.height, 0, 0.5)
# Manually squeezes the chroma down to 4:2:0 and back. Prevents further shifting.
U = U.Spline36Resize(u.width, u.height/2)
V = V.Spline36Resize(v.width, v.height/2)
U = U.Spline36Resize(u.width, u.height*2)
V = V.Spline36Resize(v.width, v.height*2)
# Recombines the individual YUV channels.
YToUV(U, V, Y)
# Depending on the source file you may need to convert from YV24 back to YUY2/YV12.

ConvertToYUY2

but I get a DNxHD error like before!

Low Light Tricks for the GH2

Here are some techniques to use with the GH2 when the sun goes down.

Low Light Trick #1 - 1/25 shutter Use 1/25 second shutter speed (when shooting 24P). Some people fear the god of filmmaking will strike them dead if they use a shutter speed other than 1/48 or 1/50. This is silly. By dropping down to 1/25 you gain a full f stop of light for free. It is advisable however, to use a tripod.
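The 'full f stop' claim checks out: stops are just the base-2 log of the exposure-time ratio. A one-liner to verify:

```python
import math

# Going from a 1/50 s shutter to 1/25 s doubles the exposure time,
# which is exactly one photographic stop.
stops = math.log2((1 / 25) / (1 / 50))
print(stops)  # 1.0
```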

Low Light Trick #2 - ISO settings. ISO 3200 is the highest it appears you can go when shooting movies, but in fact you can go to 6400 and 12800. First, do not use Manual mode. Use Shutter priority and set ISO to Auto.

This will enable the ISO to glide up to 6400. You won't see a number that says 6400, but be assured, it really is working at that speed. If you're using a third party manual lens, this is as far as you can go. But if you're using any of the Panasonic Micro Four Thirds lenses, you can go up to ISO 12800 by turning the exposure compensation all the way to the right. Low Light Trick #3 - Indoor white balance Use indoor white balance, even if you're shooting outdoors.

At high ISOs, the GH2 obviously has noise, but the worst noise is in the red channel. Indoor white balance, by its very nature, suppresses the output of the red channel, resulting in significantly less noise. You can always do color correction in post, but usually you'll find it isn't necessary - most light sources you encounter at night are warmer than daylight anyway. Low Light Trick #4 - Fast lenses. It goes without saying (but I'll say it anyway) that you need fast lenses. I only mention it because if all you have are lenses that top out at f4 or f3.5, you're at a severe disadvantage when shooting in low light.

Get yourself some fast glass - f1.7 or lower. Low Light Trick #5 - Variable movie mode This is my favorite GH2 low light trick. By using variable movie mode in conjunction with slow shutter speeds, you can really peer into the darkness. Set variable movie mode to 200% and the shutter speed to 1/13.

The resulting movie will playback at twice the speed, but each frame will be unique - no duplicates. Take it a step further - set variable movie mode to 300% and shutter speed to 1/8. Now you're letting in 3 times more light than a 1/25 shutter and 6 times more light than a 1/50 shutter. Since the movie will play back at 3 times normal speed, obviously this technique is only suitable for certain types of subject material. A tripod is mandatory for shutter speeds this slow.
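The light-gathering claims in this trick follow directly from exposure-time ratios; a quick check using the shutter speeds named above:

```python
# Exposure time per frame relative to the usual 1/50 s and 1/25 s shutters.
shutter_8, shutter_25, shutter_50 = 1 / 8, 1 / 25, 1 / 50

print(round(shutter_8 / shutter_25, 2))  # 3.12 -> roughly 3x more light than 1/25
print(round(shutter_8 / shutter_50, 2))  # 6.25 -> roughly 6x more light than 1/50
```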

When you combine all these techniques together, the results can be jaw-dropping, see-into-the-dark movies.

@Ralph B. How can that ISO 6400 or 12800 be seen? I am only able to set the ISO limit to 3200. Did I miss something?

You will not see a number that says 6400 or 12800, but the gain is actually working at those levels.

Use Shutter priority, set the ISO to Auto, go into dim light, and the ISO will automatically rise to 6400. You can easily test this by setting the ISO to 3200. You'll see that the picture is half as bright. To get to ISO 12800, you must use a Panasonic lens, do all of the above, then turn the exposure compensation all the way to the right.

The picture will be twice as bright as 6400.

Do I need some Avid software to encode from QT ProRes MOV 60i 24p to DNxHD? I tried to download drivers for VirtualDub; I found AvidCodecsLE2.1.zip (after install it does not create any folder or files in my Program Files; Win7, 64-bit).

I don't see DNxHD in VirtualDub's output compression list, and finally I found a quote to use as a bat app starter: ffmpeg -i script.avs -i%%a -vcodec dnxhd -b 175M -pixfmt yuv422p -an%%na.mov, which sadly returns: '[avs @ 36F0A0] AVIFileOpen failed with error - script.avs: Operation not permitted'. What am I doing wrong? I am importing QT ProRes with the 'QTInput' function. Do I need some Avid software to encode from QT ProRes MOV 60i 24p to DNxHD? I tried to download drivers for VirtualDub; I found AvidCodecsLE2.1.zip (after install it does not create any folder or files in my Program Files; Win7, 64-bit).

I don't see DNxHD in VirtualDub's output compression list, and finally I found a quote to use as a bat app starter, which sadly returns: '[avs @ 36F0A0] AVIFileOpen failed with error - script.avs: Operation not permitted'. What am I doing wrong? I am importing QT ProRes with the 'QTInput' function.

VDub can only export AVI formats directly, so DNxHD won't show up in VDub. (You can use the VDub frameserver or the external encoder feature to get other formats.) Does your script preview ok? If not, post your full script. Your batch file has 2 inputs; there should only be 1 wildcard. Go back a few pages, there are step-by-step instructions there. Someone re-posted them in a blog as well. Edit: Look at the bottom of this post http://www.dvxuser.com/V6/showthread.php?237584-HDMI-Capture-Problem-SOLVED-AviSynth-RULES!&p=2301668&viewfull=1#post2301668.

Is there any possibility to export through ffmpeg to a 16-bit TIFF sequence, to get possibly more data from the 10-bit HDMI record? I have had no luck so far in googling, and the manual does not mention this. Source is CRIPPLED NINJA GH2 60i24p recorded in ProRes HQ 4:2:2, and the AviSynth script seems to downconvert to 8-bit space.

The GH2 HDMI signal is 8-bit; you gain nothing going to 16-bit or 10-bit. You only waste space. The Ninja can only record what it is given. If it makes you feel better you can use v210 (10-bit 4:2:2), but the actual data is still only 8-bit piped through the HDMI.

Is that the right script? Can I use AssumeFPS(24) for cinema 24p, or will it cause some mistake in the picture?

The GH2 should do 24p recording, but it looks from the file info that it is actually 23.987. If you copied and pasted it, that should be the right script. There are some variations you can use, but you have to evaluate on a case-by-case basis. It is actually 23.976 (an approximation); the actual rate is 24000/1001, not 24.0.

PROGRESS REPORT

NASA has a webpage where they update the status of the two Voyager spacecraft, which were launched in the 1970s. They are still operational and now flying out of the solar system. I feel like this is a similar progress report for the Advanced Script.

It's now been in flight 5 months and all systems are functioning well. No changes or course corrections are necessary. I made a cosmetic change to the first post by creating a section called 'Technical Documentation'. I also added a link to 'Low Light Tricks'. Other than that, all systems continue to be GO!

PDR, okay, so for the best output for coloring (a GH2 movie shot for cinema), would you use DNxHD 175 or an 8-bit TIFF sequence?

In both, should I use in-script conversion to RGB, YV12, YUV, or something else? Are there any quality losses or advantages in these formats?

If you are using AE, I would use v210. The reason is AE treats it as 10-bit when you use 16bpc or 32bpc (the histogram doesn't break up or get quantized), even though the actual source data is 8-bit YUV. Also, v210 is uncompressed (lossless, but huge filesizes). AE has poor chroma upsampling (it uses a point resize), so color edges look very blocky, but upsampling from 4:2:2 isn't too bad in AE.

Upsampling from 4:2:0 in AE looks very bad. Using ffmpeg or avisynth to upsample to 4:4:4 (RGB) is even better, but there are no 10-bit RGB intermediates readily available yet through ffmpeg. 8-bit TIFF would be ok, but I would use PNG for a slightly smaller filesize if you were going that route. AE treats it as 8-bit (so even in 32bpc you will get more histogram irregularities than v210). If you are only doing minor grading, it won't make a difference what you choose.
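For reference, the storage cost behind the 4:4:4 / 4:2:2 / 4:2:0 labels used above follows from the J:a:b notation; this little sketch (my own helper, not from any library) computes the average number of samples stored per pixel:

```python
# J:a:b chroma subsampling over a J-wide, 2-row block of pixels:
# luma samples = 2*J, chroma samples = a + b per plane, two chroma planes.
def samples_per_pixel(j, a, b):
    return (2 * j + 2 * (a + b)) / (2 * j)

print(samples_per_pixel(4, 4, 4))  # 3.0 -> 4:4:4, full chroma
print(samples_per_pixel(4, 2, 2))  # 2.0 -> 4:2:2 (v210, ProRes HQ)
print(samples_per_pixel(4, 2, 0))  # 1.5 -> 4:2:0 (AVCHD), the hardest to upsample
```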

Another reason why you want to use a YUV intermediate like v210, is full range is preserved. So superbrights are retained in this workflow.

If you converted to RGB and didn't specify a full-range conversion, this usually implies a limited-range Rec.601 conversion; you lose data on both ends, 0-15 and 236-255 Y' (in 8-bit values), i.e. you clip the data.
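To make the clipping concrete, here is a toy 8-bit limited-range to full-range expansion (a sketch of the general 'studio range' math, not any particular application's code); anything outside 16-235 Y' is simply clamped:

```python
# Limited-range Y' (16..235) expanded to full range (0..255).
# Superbrights above 235 and superblacks below 16 are clipped away.
def studio_to_full(y):
    scaled = round((y - 16) * 255 / 219)
    return max(0, min(255, scaled))

print(studio_to_full(16))   # 0
print(studio_to_full(235))  # 255
print(studio_to_full(250))  # 255 -> a "superbright" clipped to white
print(studio_to_full(5))    # 0   -> a "superblack" clipped to black
```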

I have been lurking with a sort of glazed-eyed, slack-jawed admiration, watching you gurus speak a language that is totally alien yet somehow familiar at the same time. What you have accomplished is literally nothing short of amazing, despite the occasional naysayers, whom I have been forced to write off as curmudgeons in the mists. I am still not totally clear whether the magic script actually transfers the HDMI out to 4:2:2, but it doesn't really matter. A frame-by-frame comparison clearly shows a slight improvement in detail and a definite improvement in the shadows.

What I am wondering is will the Ninja guys be able to incorporate this magic in their recorder so any GH2 could do this on the fly. Once again, much love for your dedication and hard work.

The Rebel Alliance will prevail. What I am wondering is, will the Ninja guys be able to incorporate this magic in their recorder so any GH2 could do this on the fly?

In a nutshell, no. At one point, the people at Atomos contacted me to explore the possibility of incorporating my script into the Ninja.

The problem was the script requires AviSynth to run, and AviSynth requires Windows to run, and there simply isn't that type of computing power on board the Ninja. So, this must remain a post production process. As for it being obsolete, well, that depends on the individual. The slightly different gamma of the HDMI produces a picture that 'pops' more than the onboard recording.

I'm curious if Atomos considered perhaps making it a function of Stripper 1.0, or its own stand-alone piece of software.

Very professional information. I think it seems to be difficult, but I can learn; the material is worth collecting. Thank you very much.

Yes, it seems complicated at first, but once you get over the learning curve, it's a totally automated process that you don't have to think about at all. Plus, learning how to use Avisynth is a valuable skill in its own right. There's a tremendous amount of useful scripts and functions that have been developed for it over the years. You can do things with Avisynth that simply can't be done anywhere else, on any platform.

Can anyone confirm the HDMI signal that the PAL 1.1 firmware outputs in the new HBR 25p mode?

Is it 25p encapsulated into a clean 50i signal or does it have extraneous fields? Are the chroma channels clean or still unfixed? That's a good question. When I loaded the Firmware 1.1, I must have had a brain fart and did not test the new HBR modes through the HDMI.

Forgive me, but I don't want to reload it because when I went back to Firmware 1.0, all my custom settings got wiped out. Perhaps someone else can answer the question for you. Actually there are people using it, but yes, there is a lot of technical fear. As to whether it's still worth it, that's a question that can only be answered by how much of an insane perfectionist you are.

The gap between the HDMI picture and the hacked H264 is very small now, whereas once it was huge, as you know. Just the other day I was watching one of my comparison tests projected onto a 10-foot screen. Fine detail was basically equal, which is a major achievement for the hack. But yet, there were still differences. At high ISOs, the noise in the HDMI picture was less intrusive than the H264. The HDMI picture 'pops' more because of its slightly different gamma curve. And the HDMI picture has a certain feeling of solidness that the H264 doesn't quite have.

But is that worth it to go the HDMI route? Damned if I know. I'm just being your reporter.

For me personally, I'm more than happy with the hack. Nowadays I use the HDMI as a yardstick to see how far along we're coming. But I still maintain this thread for those few 'insane perfectionists' who may happen to come this way.

My name is Santi. I'm from Spain and I'm new here. I have a GH2 and a Ninja, and I can't use AviSynth with my videos.

They always give me errors. Can somebody explain, step by step, the process to transform a ProRes Ninja video with VirtualDub? I'm sorry, but I can't understand why I can't use VirtualDub and AviSynth.

I followed the steps of the first post, but I can't do anything with my videos. The only thing that I can do is transform with MPEG Streamclip. Thank you very much for everything. Sorry for my English.

I'll try to help you. Exactly what type of error are you getting?

HBR mode with FIRMWARE 1.1 and HDMI

I just tested the new HBR modes with HDMI output.

Here's the scoop: 30P is a disaster, hopelessly mangled. Extra fields are added in, just as with 24P. However, there's good news for PAL users: 25P comes through cleanly, with no extra fields. There is still the issue of the jaggy chroma in the red channel, even in 25P. So if you want a perfect picture, you would still need to run it through Avisynth. The script would consist of only the chroma fix. I suspect most people won't bother.

It's only really noticeable in areas of heavily saturated red. The one exception is if you're doing greenscreen keying with the HDMI picture. Then you must use the chroma fix, because the red jaggies affect the edges of flesh. Without the chroma fix, it's going to be very hard to pull a good key. NOTE: I added this information to the first post.

Hi Ralph, great work with putting together the solution for HDMI-out recording. I'm currently working with an independent film group in pre-production for a feature film, and we've looked at using the GH2, as we like the image quality, particularly using the hack/Driftwood settings.

Do you think using the HDMI-out recording (then using Avisynth) yields a better image than using the hack/Driftwood settings, particularly when thinking about how that will project onto a large theater screen? Also, do you think the workflow is rock solid - in other words, reliable for shooting important production footage? Lastly, I had looked at using the Hyperdeck Shuttle 2, as I believe that can record uncompressed. Would this work with the Hyperdeck Shuttle 2, and do you believe that would be even better image quality vs.

Thank you so much for your work and your response to my questions, Matt.

Hi, I have a question about the relative merits of this feature. I use my GH2 for astro imaging. There is a trick in astro in which images of planets (which are very small, e.g. 100 pixels +/-) are created by stacking the frames from video cameras (from 640-pixel webcams to dedicated 640-pixel CCD cameras shooting raw). The individual frames can look like mush, but when several hundred are stacked in software the results can be amazing. The key aspect is to get clean, uncompressed data. (see for dedicated camera spec). The smaller pixels of the GH2 do offer some benefits, and for info, images are generally at ISO 160 to 320 - they are quite bright, the issue being atmospheric disturbance; you just need to find approx 400 images in a 3 or 4 minute window when the atmosphere is steady for a fraction of a second. So 24 mins is plenty of video.
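For anyone curious why stacking works at all: averaging N independent frames reduces random noise by roughly a factor of sqrt(N), so the ~400 usable frames mentioned above buy about a 20x per-pixel noise reduction. A tiny sketch of that rule of thumb (real stacking software also aligns and quality-sorts the frames first):

```python
import math

# Stacking N frames averages away random noise: the noise standard
# deviation shrinks by roughly sqrt(N) relative to a single frame.
for n in (100, 400, 1000):
    print(f"{n} frames -> noise reduced about {round(math.sqrt(n), 1)}x")
```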

So the question I am asking myself is: do you think the quality of the individual images in ProRes would be noticeably more free from artefacts and detail loss (which is small and subtle) than one of the GH2 hacks? To be clear, the final result here is a static image composed from information in several hundred video frames (which is then post-processed in specialist software), not a video. In terms of final image quality, is there any difference between the Ninja and the Blackmagic, or is it just an implementation difference? Regards, Steve. PS: In Canon's case, live view is the preferred way of doing image stacking.

To mattgh2 and billhinge: Both of you are asking whether the HDMI picture is better than the hacked picture. There is a difference, but it's subtle. The problem is each person has their own perception of what's acceptable.

Mine may not match yours, so there's no way I can say do it or don't do it. The best advice I can give you is to get a hold of an HDMI recording device or capture card, and run your own tests. The good news is you can record simultaneously in-camera and externally, so you can compare the two later.

@billhinge Just wondering why you would use video mode for astro pictures. It seems like you should shoot in still mode for the highest quality. And perhaps use the 40-frame burst mode for bright objects.

Not a silly question. Deep-sky images like star fields do tend to contain stars and nebulae, which are generally faint, and in the case of stars they are point objects. Here the name of the game is to take multiple single frames and stack them using clever software. Pros would use Peltier-cooled CCD astrocameras, but they are expensive; fortunately DSLRs hold up reasonably well, except that the normal Canons/Nikons usually have fairly severe internal UV/IR cut filters to maintain sharpness.

The GH2's internal filter is lazy, being sensitive to some UV and IR, which softens the image. Adding an external UV/IR cut filter with a narrower range sharpens the image (on the GH2 - this works on cameras with weak internal filters) and improves the colour; the drawback is that a decent filter is expensive. Anyway, planets and the moon differ because they are much brighter and have a visible area rather than a point image. Being bright, they only need a short exposure, e.g. Mars through a telescope could be 1/100s @ ISO 160, fl=2800mm, f10. The images are tiny, though; typically Jupiter will cover approx 100 pixels on the GH2, so surprisingly, shooting at 640x480 produces a larger-looking image. Now the problem is that the atmosphere causes the image to literally boil at such high magnification, e.g. a few hundred to even x1000.

But for, say, approx 1/100 of a second every second we may get some useful info, the rest being garbage, as the atmosphere steadies briefly. If you shoot as many 640x480 frames in, say, 3 minutes as you can, at the highest possible data rate without compression, you just might get some useful data (each individual still image will still look like crap), but by using special stacking software you take, say, 400 to a thousand of the best and stack them, and a useful still image appears. Then you use other software, such as deconvolution, wavelets, etc., to post-process and bring out hidden detail. So really it's collecting 'bits' of data and averaging, not video as you would think of it (each data compression negatively affects the final result; that's why dedicated planetary cameras would use a small 640x480 CCD). Hope that makes sense :) Here are some stacked images (split into raw images, RGB channels and combined).

I use the GH2 with a Ninja. It works great (as I only shoot in 25p, being European); however, you still have chroma issues.

My problem is that I (like many other professionals) use a Mac. Right now I'm using this plugin: But would much prefer to have it in Premiere instead (anybody know if it could be ported easily), or even better: does anybody have a solution to get AviSynth working on a Mac? I've tried WineBottler, but the software doesn't seem to install properly. Help much appreciated!