I want to share some battery tests I did with the two Zaxcom ZMT transmitters: ZMT4 and ZMT4-X.
The ZMT4 advertises “up to 7h” battery life, and uses NP-50 batteries (same as Lectrosonics SSM). The ZMT4-X advertises “up to 16h” battery life, and uses Motorola BT100 batteries (only used by some unusual Motorola walkies as far as I can tell).
I measured battery life by putting the transmitters into record, waiting for them to die, and then ingesting the recorded files. The length of the recording is the time I used for battery life. Transmitter power was set to 50mW. For realism, I tested with a lav connected (DPA 4660). I also tested two Countryman B6s to see if a different lav model affected battery life (but I screwed up the test, so keep reading). All the transmitters were running firmware v4.59.
I tested two NP-50 batteries: One from Fujifilm, and Lectrosonics’ custom LB-50. Both are manufactured by Panasonic, but come from different factories and have different UL codes. The Fujifilm battery is certified under UL code MH27866, which identifies the manufacturer as “PANASONIC CORPORATION ENERGY COMPANY LITHIUM-ION BATTERY BUSINESS UNIT”. Lectrosonics’ battery is certified under UL code E341239 and manufactured by “Panasonic Energy (Suzhou) Co., Ltd”. Both are rated for the same electrical specifications: 3.6V, min. 940 mAh / 3.4 Wh, typ. 1000 mAh.
I bought the Fujis about a year ago from Studio Economik, but they were probably old stock; I don’t think Fuji has actively sold them for a couple years. They’ve been lightly used in the last year; I doubt they’ve lost much of their initial capacity yet. The Lectrosonics LB-50s were brand new about a month ago.
Sadly, although Zaxcom announced their own NP-50 Pro battery last month, I didn’t have any to try. It’s worth noting that Zaxcom specs their battery at 900mAh vs. 1,000mAh for both the Fujifilm and Lectro equivalents, so on paper Zaxcom’s battery probably gives up about 10%. Notably, Zaxcom lists the “expected runtime” for the ZMT4 as “6 hours with a lav”, so they aren’t making the seven hour claim for their own battery. Zaxcom’s battery is about half the price of Lectro’s battery, and roughly equivalent to what I bought the Fujis for … but the Fujis no longer seem to be available.
The BT100 is Motorola part no. PMNN4468 (B) (the final letter apparently denotes revision, so any final letter should be compatible). The batteries were brand new. There are no generic versions of this battery available as far as I can tell, so I don’t have any other comparison. There is no UL certification number on the battery, but it does list the following information: Cell origin: China, Manufacturer: Foxlink Automotive Technology (Kunshan) Co, Ltd., 3.8V / typ. 2300mAh. Simple math converts that into 8.74 Wh. That is 2.4x the capacity of the NP-50, which roughly corresponds to the ratio in claimed battery life: The ZMT4-X’s 16h battery life spec is 2.3x the ZMT4’s 7h spec.
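If you want to check the arithmetic, here’s the watt-hour math in a few lines of Python, using the typical ratings printed on the batteries:

```python
# Energy (Wh) = voltage (V) * capacity (Ah), typical ratings from the labels
np50_wh = 3.6 * 1.000    # NP-50: 3.6V, typ. 1000 mAh -> 3.6 Wh
bt100_wh = 3.8 * 2.300   # BT100: 3.8V, typ. 2300 mAh -> 8.74 Wh

print(f"BT100 energy: {bt100_wh:.2f} Wh")             # 8.74 Wh
print(f"Capacity ratio: {bt100_wh / np50_wh:.1f}x")   # ~2.4x
print(f"Spec runtime ratio: {16 / 7:.1f}x")           # ~2.3x
```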
All batteries were fully charged, but had sat for about a week, so not a fresh charge.
Results
| Tx Type | Battery Type | Lav Type | Recording time |
| --- | --- | --- | --- |
| Zaxcom ZMT4 | Fujifilm NP-50 | Countryman B6 | 5:41 |
| Zaxcom ZMT4 | Fujifilm NP-50 | Countryman B6 | 5:47 |
| Zaxcom ZMT4 | Lectrosonics LB-50 | DPA 4660 | 6:43 |
| Zaxcom ZMT4 | Lectrosonics LB-50 | DPA 4660 | 6:47 |
| Zaxcom ZMT4-X | Motorola PMNN4468B | DPA 4660 | 15:49 |
| Zaxcom ZMT4-X | Motorola PMNN4468B | DPA 4660 | 15:00 |
Observations
Under ideal circumstances, the batteries do get pretty close to Zaxcom’s advertised runtime, which is great. Both the ZMT4 and ZMT4-X came within 15 minutes of their advertised runtime — on the best-case result. The ZMT4 was within 3.1% of its advertised runtime; the ZMT4-X was within 1.1%.
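For those checking my math, a quick sketch of how those percentages fall out:

```python
def shortfall_pct(advertised_min, measured_min):
    """Percent short of the advertised runtime."""
    return 100 * (advertised_min - measured_min) / advertised_min

# Best-case results vs. advertised spec
print(f"ZMT4:   {shortfall_pct(7 * 60, 6 * 60 + 47):.1f}%")    # 3.1%
print(f"ZMT4-X: {shortfall_pct(16 * 60, 15 * 60 + 49):.1f}%")  # 1.1%
```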
Here’s the part that I screwed up: I inadvertently paired the two DPA4660s with the Lectrosonics LB-50s and the two B6s with the Fujifilm NP-50s. And the B6 / Fujifilm combo gave almost exactly an hour less recording time than the DPA4660 / Lectrosonics combo. Looking at the data alone, I can’t tell if Lectro’s batteries are fundamentally better than Fuji’s, if B6s are significantly more power hungry than DPAs, or a combination of both. I suspect that Lectro’s battery is the major difference (my guess is they have an advantage in both age and manufacturing quality), but I can’t prove it.
The NP-50 combos both performed similarly, but there was a pretty significant difference between the two Motorola batteries: 15h even vs. 15:49. In the grand scheme of things, that’s a 5% difference, which is probably not an unreasonable margin of error for the test (most likely down to variability in cell quality and differences between charge cycles), but it does mean you should be conservative when estimating runtimes. I would definitely stick with the lower result when looking for practical runtimes (and this is where Zaxcom’s “up to” specification gets a bit suspect for me). One other quirk of the ZMT4-X: The battery telemetry doesn’t work super well. Both my test units showed roughly 50% battery life well into the thirteenth hour of operation. Presumably they fell off quickly after that, but I wasn’t watching closely.
>6h battery life on a tiny ZMT4 means one battery change a day at lunch in most cases. >15h on the ZMT4-X (roughly the size of a Lectro SMV) is phenomenal. Practically speaking, it means no changes during the day. If it dies, you should be well into triple-time on your paycheque.
Size Comparison: The case on the left is for a Lectrosonics SMV. I tried putting the ZMT4-X in the case; it was just slightly too thick.
Historic Test
One other point of reference: I previously did a battery test with only the Fujifilm batteries. But I did it with 8 different batteries, and I did it when I first got the batteries (i.e. they had zero charge cycles at the time). The test methodology was slightly different as well: I tested with no lav connected. So, this test isn’t useful for comparing batteries, but it is an excellent way to gauge run-to-run variability, because I have a sample size of 8.
| Tx Type | Battery Type | Lav Type | Recording time |
| --- | --- | --- | --- |
| Zaxcom ZMT4 | Fujifilm NP-50 | None | 5:50 |
| Zaxcom ZMT4 | Fujifilm NP-50 | None | 6:27 |
| Zaxcom ZMT4 | Fujifilm NP-50 | None | 6:15 |
| Zaxcom ZMT4 | Fujifilm NP-50 | None | 6:13 |
| Zaxcom ZMT4 | Fujifilm NP-50 | None | 6:18 |
| Zaxcom ZMT4 | Fujifilm NP-50 | None | 6:37 |
| Zaxcom ZMT4 | Fujifilm NP-50 | None | 6:15 |
| Zaxcom ZMT4 | Fujifilm NP-50 | None | 6:12 |
What I get from this is that the original Fujifilm test was mostly clustered around 6:15 runtime, with a couple outliers in both directions. That’s about 30 minutes more than the more recent test. What does that mean? Hard to say. It could mean the Fujis perform better on a recent charge (the original test was fresh off the charger; in the more recent test they had sat for a week). It could mean the Fujis have lost a bit of their capacity as they’ve aged in the last year. And it could mean the B6 really is a bit power hungry.
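To put a number on that clustering, here’s the spread of the eight runs from the table above:

```python
import statistics

# The eight recording times from the table, converted to minutes
runs = [5*60+50, 6*60+27, 6*60+15, 6*60+13, 6*60+18, 6*60+37, 6*60+15, 6*60+12]

mean = statistics.mean(runs)    # ~375.9 min, i.e. about 6:16
stdev = statistics.stdev(runs)  # ~13.5 min, or about 3.6% of the mean
print(f"mean {mean:.0f} min, stdev {stdev:.1f} min")
```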
What does seem likely is that the Lectrosonics LB-50 really is a better battery than the Fuji. The LB-50 tests were done at a similar stage in battery life (i.e. new), and had the dual disadvantages of having sat for a week and having to power a DPA4660 lav. Despite those disadvantages, the LB-50 lasted about 30 minutes longer.
Conclusion
That’s probably way more words than needed to be written about these batteries. The TL;DR summary is this: Zaxcom’s advertised runtimes are pretty close as a best case, but you’d probably be wise to knock an hour off each if you want a realistic average case. In other words: Expect 6h from the ZMT4, and 15h from the ZMT4-X. I’m still shocked at 15h of runtime on a transmitter. That’s effectively unlimited for normal shoot days.
I’d say there’s some evidence that the Lectrosonics LB-50 does offer better performance than the much more common Fuji battery. Whether that extra half an hour is worth paying almost double for is up to you. I’d guess that for some people it might be: reliably getting over 6h is the difference between never having to change batteries before lunch and potentially having to scramble for last-minute battery changes right before lunch. If production is paying for batteries as expendables (which they should be), buying the Lectro batteries will make you a more reliable mixer.
Rotary knobs suck for entering text. I got pretty fast at using Zaxcom’s on-screen keyboard on my Nomad (and now, my Nova), but it’s not the same as a real keyboard. I have developed the very specific skill of being able to simultaneously walk quickly and write scene notes or track names on my recorder as we hustle towards our next shot. It is possible to use a keyboard with the Nova (and the Nomad, with an asterisk or two), but I could never figure out how to make it bag-friendly. A keyboard seemed like it would add awkward bulk and weight, and the trade-off never seemed worthwhile: Doing docs, I value mobility and the ability to keep up with the action over most other priorities.
I started seriously looking at keyboards when I bought my Nova, but even “mini” keyboards built for tablets and phones are too big for a bag. There’s an ergonomic reason for this. A keyboard can only be so small before it stops being possible to type on it with ease, and most mini-keyboards are intended to be an upgrade over the on-screen keyboard of a touch-screen device. There’s not much point in a keyboard so small that your fingers can’t fit on it comfortably — tablet keyboards are already pushing that limit. But I’m not trying to improve on a touchscreen keyboard. My baseline is much lower: The rotary-knob keyboard on the Nova means typing each letter takes a second or two. I don’t care about touch typing; I just want to enter a few words here and there without it needing my full attention.
There’s a small market niche for thumb-keyboards that seems to be aimed at gamers or home theatre devices. Perfect! A lot of these are generic brands that troll the depths of Amazon marketplace and all come from the same factory, but there is one brand that does seem prominent, if that word can be applied to such an obscure niche: Rii. Most of their keyboards come with lots of unnecessary bells and whistles like track pads and joysticks. But they have one, very plain, very small model that isn’t even listed in their product index. I give you the Rii 518BT Bluetooth Keyboard.
The smallest keyboard on the internet.
This is literally the smallest keyboard I could find on the internet. It is a little under 11cm wide, and it fits in the palm of my hand. More importantly, it could fit on my sound bag without getting in the way. Just one problem. The 518BT Bluetooth Keyboard uses … bluetooth. The Nova needs a custom-built cable and an obscure menu option just to handle USB. Bluetooth? It doesn’t do it.
Spot the keyboard. This is the level of unobtrusiveness that I was looking for.
I gave up. Putting a bluetooth device next to my precious receivers was probably a bad idea anyway. For about six months, I put the idea out of my head, until I read about someone else using a wireless keyboard with the Nova. Not in a bag, mind you, but it got me thinking. Most wireless keyboards use 2.4GHz dongles but show up as regular USB devices. Surely I could find a bluetooth dongle that could do the same thing? Surely some obscure Chinese manufacturer would be filling this equally obscure product niche somewhere on AliExpress? No such luck. It doesn’t exist.
An aside: Most devices that can plug into a computer provide the bare minimum amount of hardware needed to convert some sort of input or output into digital bits. The rest is software. Most of bluetooth is software; the “bluetooth” in a bluetooth dongle is mostly just an antenna and a receiver chip that can turn the raw RF the antenna picks up into bits. The dongle feeds those bits to the operating system, which does the rest of the bluetooth magic in software. If your operating system doesn’t support bluetooth (and the Nova’s operating system definitely does not), most bluetooth dongles don’t actually add bluetooth. It is possible to implement bluetooth in hardware; that’s more or less what the HID proxy part of the bluetooth spec does: It provides a way for a bluetooth dongle to output a USB signal (a very specific subset of USB called HID) instead of a partly-bluetooth stream of bits that needs software to interpret it. The reason no HID proxy dongles exist on the market is that the vast majority of bluetooth dongles are intended to be plugged into devices that do have software support for bluetooth, so why add hardware to do a job that works perfectly well in software?
Unfortunately, knowing why an HID proxy dongle doesn’t exist doesn’t get me any closer to making it exist. Or does it? It turns out most of the people interested in an obscure bluetooth capability like HID proxy are hardware developers who want to use bluetooth for their own bits of hardware, hardware that doesn’t have software support for bluetooth. So, when I searched “HID Proxy” looking for a dongle for purchase, what I got was a whole bunch of developers talking about how they were building their own bluetooth dongles. Uh oh. My inner tinkerer was tingling.
When I say small, I mean small. That bright pink box tucked behind my Nova is a computer.
In practice — let’s just say the problem with tinkerers is they never finish tinkering, so nothing is ever quite done. The Pi (specifically, the Raspberry Pi Zero W) has a lot going for it. It has bluetooth and USB. It can be powered by its USB port, which means I should be able to plug it into the Nova without running a separate power cable to my BDS. It consumes as little as 0.4W, which is more than I would like just for a keyboard, but not so much that it seriously damages my battery life. And it’s small and light enough to fit in my bag.
I’ll spare you the painful tinkering details of getting the software up and running. Suffice to say, it worked in the open-source, Linux sense of “worked”, meaning it technically did what it said, but required a bit of troubleshooting and a fair amount of arcane knowledge of how software and Linux work to get it up and running. I have to say, I was pretty impressed when I plugged it into my non-bluetooth-capable desktop and was immediately able to start typing on the Rii keyboard.
Now I needed to plug it into my Nova. In the strange logic that only makes sense to Zaxcom, the Nova has what looks like a USB port, but which outputs an old-fashioned RS422 serial signal. We do things like that in audio; we like running unusual signals through familiar connectors (I’m looking at you, Dante). In an added twist, the Nova can switch the signal that is output through the USB port to use USB signalling … but only with a custom cable that swaps a few pins and adds a couple resistors. I didn’t design it.
One more thing: I own the outboard fader panel for the Nova — the FP7 — which is already plugged into the USB, <ahem> serial port on the Nova. Thankfully, this is a solution, not a problem, because the FP7 has a genuine, honest-to-God USB port on the back that is specifically designed to pass a USB keyboard signal through to the Nova. It probably helps that the FP7 is, in fact, a different flavour of Raspberry Pi under the hood, so the USB port probably comes free with the processor. For those keeping score, that means that the signal chain for connecting the keyboard goes through two separate Raspberry Pis before it gets to the Nova.
This kit contains two Raspberry Pis.
Plugging the Pi into the FP7 produced … nothing. Actually, it produced a python exception in the log of the Raspberry Pi, but I’m sparing you those details. I had two issues. One, I didn’t get anything unless the Pi was running its special relay software before I plugged it into the FP7 (and the FP7 also needed to be freshly booted). This meant my plan of powering the Pi from the FP7 wasn’t going to work. Two, most of the keys didn’t work … which I eventually realized was because the Nova acted as though CTRL was permanently pressed, which gave me a grand total of six working hotkeys: CTRL+R, CTRL+S, CTRL+P, CTRL+T, CTRL+V, CTRL+C, corresponding to Record, Stop, Play, Enable Tone, Toggle Slate Mic, Toggle Comms Mic respectively. Those functions already have prominent, dedicated buttons on the Nova, so I wasn’t interested in using my keyboard just for that.
Thank goodness I like to tinker, because fixing #2 meant firing up a debugger to troubleshoot the relay software and digging into the technical details of the HID spec to find out exactly what data the Pi was sending and what the Nova was expecting to receive. For the record, CTRL-R looks like x01/x00/x15/x00/x00/x00/x00/x00 when the Nova receives it. It turns out, there are two specifications for how a USB keyboard can send keystrokes, the default called “report protocol”, and a much less flexible one called “boot protocol”, which is designed for situations where the bare minimum functionality is needed. Guess which one the Nova uses? I won’t pretend I fully understand the difference, but thankfully full understanding wasn’t needed for me to hack at the code enough to get it working.
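To make that byte string a little less opaque, here’s a minimal sketch of how an 8-byte boot-protocol keyboard report breaks down (key names per the USB HID usage tables):

```python
# Boot-protocol keyboard report: byte 0 is a modifier bitmask, byte 1 is
# reserved, and bytes 2-7 hold up to six concurrently pressed keycodes.
report = bytes([0x01, 0x00, 0x15, 0x00, 0x00, 0x00, 0x00, 0x00])  # CTRL-R

MODIFIERS = {0x01: "LeftCtrl", 0x02: "LeftShift", 0x04: "LeftAlt", 0x08: "LeftGUI"}

def decode(report):
    mods = [name for bit, name in MODIFIERS.items() if report[0] & bit]
    # HID usage IDs 0x04 through 0x1D map to the letters a-z
    keys = [chr(ord("a") + k - 0x04) for k in report[2:8] if 0x04 <= k <= 0x1D]
    return mods, keys

print(decode(report))  # (['LeftCtrl'], ['r'])
```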
I can’t tell you how gratifying it was to turn on my keyboard and suddenly see my keystrokes coming through. I was unreasonably happy that CAPSLOCK worked as expected; it felt like I’d added a new capability to my mixer. I’m sure a significant part of the gratification was how difficult it had been to actually get working. But I’m also proud of being able to do something that is definitely not supported by Zaxcom: connecting a bluetooth keyboard to the mixer, and perhaps helping ensure my safety by making it so my very specific skill of entering text by rotary knob while walking isn’t something I need to rely on.
My mixer keyboard mounted with velcro to the front of my bag, right where I need it. Small, light, and convenient.
Time will tell if this is actually a useful addition to my bag. I still haven’t solved the issue of needing the Raspberry Pi to be booted before I plug it in, and that is actually a serious impediment. At the moment, this is the process for turning the keyboard on:
1. Plug the Raspberry Pi into a USB power supply. (This means I need to add one to my BDS (Battery Distribution System) … and I also need to buy / build a BDS, since my bag currently powers only three devices: the Nova, the FP7, and a TRXCL5, all powered directly from a pair of dual-output battery shoes.)
2. Turn on the keyboard (a toggle switch).
3. Wait 45-60 seconds for the Pi to boot.
4. Press keys on the keyboard every 15 seconds to check if the Pi is booted (the light on the keyboard is the only indication of when the Pi is ready).
5. Plug the Pi into the FP7.
6. Disconnect the Pi from the USB power supply (optional).
I’m not sure if that process will be too much trouble for the benefit of having the keyboard. I have a feeling I will end up just not bothering a lot of the time. At the moment, I’m fairly sure that fixing it so that the keyboard powers up with the FP7 needs to be done in the FP7 firmware, which is a bit beyond my hacking abilities. Powering up the Pi from the FP7 not only prevents the keyboard from being recognized, it also prevents the FP7 from recognizing a regular keyboard until the FP7 is power cycled, so I think I may be freezing the USB driver in the FP7.
Another possibility I haven’t explored (because I’m not hopeful) is powering the FP7 from the Raspberry Pi. I noticed this while trying to reboot the FP7 during testing: If I had the Pi powered from the wall while it was plugged in to the FP7, pulling the power on the FP7 didn’t reboot it because the power source would fail over to the Raspberry Pi. Just another quirk of the FP7 also being a Raspberry Pi under the hood.
I feel like I’ve proven something here. I proved it is possible to use a Bluetooth keyboard with the Nova, and that seems worthwhile, because there are no other keyboards suitable for a sound bag other than the Rii 518BT. I’m very happy with it. The keyboard itself cost me CAD$20, and I paid CAD$40 for the Pi with its case from someone on Craigslist (slightly more than the retail value, but it was cheaper than paying for shipping). And, now that I have a computer plugged into my mixer, I’m wondering what else I can do with it. I’ve been thinking for a while that I need an automated card transfer script for amalgamating recordings from all my transmitters with the mirror card on the Nova. Maybe that will be my next project…
For ages, I’ve been frustrated by the lack of real-world information about the very expensive tools we use. Before I got into sound, I used to do reviews of computer hardware, and it was unthinkable to me that I would make a purchase without knowing what I was buying. In film sound … there just aren’t very many people doing what we do, so we tend to make our decisions based on what everyone else is using. Not many of us get to do comparison tests before we lay down a large amount of money, especially outside the major film centres.
I’ve been on a mic acquisition kick lately, and I’ve ended up with too many shotgun mics, so I need to figure out which ones I want to keep as my workhorses. To help me choose, I decided to test them all under controlled circumstances and see what I could learn. These are the mics I tested:
Sennheiser MKH-60
My workhorse for years. I originally picked it up because it was cheap (used) and available, and because I had enough other Sennheiser gear that I thought I could trust it. I’ve been happy with it, but I’ve always had an inferiority complex about it because it’s not what most other mixers in my area use (see the next two choices). Not a very common choice these days, especially since the MKH-8060 was released. The MKH series has perhaps the lowest self-noise available due to its RF condenser design, and is well known to be durable in all sorts of weather and a wide range of temperatures.
Schoeps CMIT-5U
If there’s one mic that screams professional sound mixer, it’s the CMIT. The Schoeps name has clout, and the CMIT is standard. I know many mixers who use them indoors as well as out, although most seem to have switched to the MiniCMIT due to its smaller size. It has a reputation for not doing well in humidity, though their latest “Generation D” capsule has supposedly fixed this. I don’t have that version.
Sanken CS-3e
I had to borrow this one. In my area, this is even more common than the CMIT for outdoor use (I think most mixers I know have both), due to its multi-capsule, noise-cancelling design. It has a reputation for being a “laser”, capable of very high noise-rejection without sacrificing on-axis fidelity.
Schoeps SuperCMIT-2U
The CMIT … but with noise cancellation. A different design from the CS-3e, but the same basic idea: Use a second capsule to reject off-axis sound through noise cancellation. It is even more eye-wateringly expensive than the regular CMIT, and is awkward to use because it outputs AES42 digital audio rather than analogue, and it requires 10V digital phantom power, which is not interchangeable with regular 48V phantom power. It uses the same capsule and interference tube as the regular CMIT, and it outputs an unprocessed signal in addition to the noise-reduced version, so in theory you get a CMIT along with the noise cancellation. I really want to see if that is true.
One warning: My SuperCMIT is not in pristine condition, and it has a 1.5kHz pure tone in the noise floor of both the processed and unprocessed channels that is audible under very quiet conditions. I didn’t hear it in any of the test recordings for this shootout, but Schoeps has confirmed that this isn’t normal and I will be sending it in for service.
There are several other comparable mics I would have liked to test, but don’t have ready access to. Maybe in the future: DPA 4017, Neumann KMR81i, Schoeps MiniCMIT, and Sennheiser MKH-8060. I would also have liked to test a Røde NTG-something (I don’t know their lineup that well), Rycote’s HC-22, and perhaps the DPA 2017. These are cheaper, “lesser” microphones, and I would like to find out if the more expensive crop is really worth the money. Next time… Last, I wish I still had a working Sennheiser MKH-416, purely because it’s such a common, iconic point of reference.
Methodology
I tested all the mics together, bringing them as close as I could to a single point without creating too much of an acoustic shadow. I ended up mounting them all on a single C-stand, angled just slightly down at eye level, roughly 10 centimetres apart, and offset diagonally so that the horizontal and vertical planes were clear for all mics. This isn’t especially realistic, since standard boom position is a much steeper angle towards the ground, but it made it easier to test off-axis noise, as I could walk around behind them and be reasonably close to 180° off-axis. I also turned off all filters (high-pass and low-pass) on both the microphones and the mixer, and I tested them bare, without foam or fur (also not realistic). I recorded my tests with all mics recording simultaneously into a Zaxcom Nova2. A special note for the SuperCMIT: I recorded both the “unprocessed” channel and the “processed” channel with “preset 1”, which is the lower-strength noise reduction recommended for regular use. I did not test “preset 2”, which is much more intense, and which Schoeps warns may contain audible artifacts (I couldn’t record both presets simultaneously, so I picked the one more likely to be used).
My basic audio source was my own speaking voice, recorded on-axis, and at 90° and 180° off-axis, all at a distance of roughly six feet (about as far as you’d want to get under ideal circumstances). I did the test in my living room (smallish, about 4 by 6 metres), and again outdoors in a quiet neighbourhood. In both locations, I repeated the test with an off-axis noise source (a speaker playing music) as a way to gauge how effectively the mics rejected noise. In the outdoor location, I was able to opportunistically capture some common problems: A siren, some crows, several car passes, and an airplane.
I did my best to align the recording levels of all the mics before testing, which is harder than it should be. I used my speaking voice on-axis as the test signal, and then confirmed with a 1kHz tone played from my laptop speakers. I think I got pretty close (let’s say within a dB) except for the CMIT indoors, which was 2-3dB too hot; I corrected this when I moved outdoors. It’s not super repeatable; even the 1kHz tone fluttered the levels up and down by a couple dB, but the recordings are pretty consistent when switching between tracks.
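As an aside, matching levels boils down to measuring the same source on each mic and computing a trim in dB. A minimal sketch of the formula, assuming you’ve measured the RMS amplitude of the 1kHz tone on each track (the 1.3x figure below is illustrative, not a measurement):

```python
import math

def gain_offset_db(reference_rms, measured_rms):
    """dB of trim to apply so the measured track matches the reference."""
    return 20 * math.log10(reference_rms / measured_rms)

# e.g. a track whose tone measures ~1.3x the reference amplitude
# needs to come down by roughly the 2-3dB error noted above
print(f"{gain_offset_db(1.0, 1.3):+.1f} dB")  # -2.3 dB
```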
Recordings
The recordings are BWF files direct from the recorder, unedited except for some file renaming. There is metadata to identify tracks, but if you are using software that can’t read it, here is the track order:
Schoeps SuperCMIT, unprocessed
Schoeps SuperCMIT, with noise reduction (preset 1)
Schoeps CMIT-5U ***This track was about 2-3dB too hot for the indoor comparisons.***
Sanken CS-3e
Sennheiser MKH-60
Unfortunately, I couldn’t find a suitable WordPress plugin capable of soloing individual channels (or even one that would play BWF files), so you’ll have to download the recordings and listen to them in your favourite DAW.
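If your player can’t solo channels either, splitting the polyphonic files into mono tracks takes only a few lines of Python. A sketch using the soundfile library, with hypothetical output names matching the track order above:

```python
import soundfile as sf

TRACKS = ["supercmit_unprocessed", "supercmit_preset1",
          "cmit_5u", "cs_3e", "mkh_60"]

def split_poly_wav(path):
    # data has shape (samples, channels) for a multichannel file
    data, rate = sf.read(path)
    for i, name in enumerate(TRACKS):
        sf.write(f"{name}.wav", data[:, i], rate)

split_poly_wav("indoor_test.wav")  # hypothetical filename
```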
Indoor Results
First, some general listening notes on how the mics compared on-axis. The unprocessed SuperCMIT and the CS-3e both sounded the most neutral to my ear, at least as far as direct sound was concerned. Tonally, the CMIT and the MKH-60 sounded identical; I couldn’t tell them apart when I switched between them. To me, they sounded “better”, but also slightly coloured: Both had a bit more low end, and I liked this, but I think the SuperCMIT and CS-3e are probably more accurate. That’s my subjective opinion, so take it with a grain of salt, and you can measure the recordings if you want more precise analysis. The processed channel of the SuperCMIT sounded distinctly thin to me. And, both channels of the SuperCMIT sounded noticeably phasey or indistinct; all the other mics sounded much clearer.
I could tell immediately that, despite having the same capsule and interference tube, the CMIT and the SuperCMIT’s unprocessed channel do not match well; they sound quite different. They are different tonally and they have different noise floors, but the biggest difference is the metallic phasiness of the SuperCMIT. I hypothesize that this comes from the additional open ports for the noise-cancellation capsule, which sit behind the primary capsule, and which likely change the back-pressure on the main capsule and allow an additional route for audio to reach the capsule other than through the interference tube. The noise floor on the unprocessed channel of the SuperCMIT is noticeably higher than the regular CMIT, and the processed channel is higher still (probably due to the noise-cancellation capsule being in the signal chain). I also hypothesize that the built-in pre-amp inside the SuperCMIT is of lesser quality than the pre-amps in my Nova2; this can’t be changed, since the pre-amp and conversion to digital are inherent parts of how the SuperCMIT works.
On the topic of self-noise, the CS-3e was also quite noisy. Perhaps this is because, like the SuperCMIT, it has multiple capsules? The regular CMIT’s noise floor was audible, but very quiet, and I couldn’t hear the MKH-60’s noise floor at all (I’ve heard it when recording in the arctic, but rarely anywhere else).
None of the mics showed any difference in the level of the ambience in the room. I deliberately left the room quite noisy; I have a running computer right next to the mics, and the fridge is on in the background. The ambience was within a decibel on all the recordings except the CMIT, but that is because my level alignment on the CMIT was poor; the speech levels were commensurately higher than the rest of the mics, and when that is corrected for, the difference in ambience disappears. Even the noise-cancelling SuperCMIT and CS-3e did not seem to cancel any room noise, though both mics seem to emphasize a pure tone in the room noise that isn’t as noticeable in the other mics. My hypothesis is that HVAC noise is largely a diffuse noise source once it bounces around in a room, so the noise cancellation isn’t effective. Long story short, don’t use interference tubes indoors; they don’t help. But you knew that already…
Off-axis, the mics showed bigger differences. Schoeps is known for having excellent, uncoloured off-axis performance, and that was certainly true of the CMIT. It loses a bit of high end, but unless you are listening for it, it sounds remarkably similar to the on-axis sound, just quieter. The unprocessed SuperCMIT was similar (in general, aside from the differences already stated above, the unprocessed SuperCMIT did behave similarly to the CMIT in most respects). The processed SuperCMIT sounded even more neutral off-axis than the CMIT. Off-axis, it was quieter but very close to the on-axis sound. On the other hand, it sounded thin both on- and off-axis, so even though it was more consistent, I preferred the sound of the regular CMIT.
Neither the CS-3e nor the MKH-60 could match the off-axis neutrality of the Schoeps mics, though their errors tended in opposite directions. The MKH-60 is a traditional interference-tube shotgun, and it sounds like it. Its directionality is primarily at higher frequencies; off-axis it sounds muffled and dark. Despite that, I was surprised how audible off-axis speech was, even at 180° off-axis. You can clearly understand what I’m saying; you could use it if you had to. With music playing in the background, this changed a bit; the MKH-60 was the least intelligible against the background noise. The CS-3e was the opposite: Its off-axis sound is tinny, emphasizing treble and losing a lot of bass. Presumably this is due to its line-array cancellation; it certainly doesn’t behave like a conventional shotgun mic. It also sounds distinctly coloured in more ways than just emphasizing treble. Off-axis, it sounded the least natural of all the mics. However, with the exception of the noise-cancelled SuperCMIT, it maintained the best intelligibility for the on-axis speech while music was playing off-axis.
One thing that surprised me was that there was almost no difference in the amount of rejection between the microphones. The processed SuperCMIT rejected slightly more noise than the others, but only by a decibel or two. In my opinion, this isn’t enough to matter, especially taking into account how much less natural the on-axis audio was. The noise-cancellation of the CS-3e made no difference whatsoever to the levels, though, as noted, it did help with intelligibility.
Overall, I felt the CMIT had the best overall balance between sounding natural and maintaining intelligibility when the background noise was competing with the on-axis speech. But the differences between all of the mics were small. So small that I realized I had made a mistake testing them indoors, and I needed to test them outdoors if I wanted to understand their differences. In reality, none of these mics are the right choice for indoor use.
Outdoor Results
For on-axis sound, the outdoor location reduced the differences I had heard indoors. All the same comments apply: The SuperCMIT and CS-3e sounded most neutral; the CMIT and MKH-60 sounded similar to each other, but with slightly more bass than the others, and the SuperCMIT seemed to have a slight echo. This echo was much reduced compared to the indoor test, and it seemed inconsistent; sometimes it disappeared entirely. My guess is that it is sensitive to near-field reflections, so I think it’s at its best with no hard surfaces around. Overall, whatever differences existed in on-axis sound were barely noticeable in outdoor conditions.
I made one lucky recording that did highlight some differences: While I was setting up, there was a flock of crows in the distance, cawing and making a racket. They were roughly on-axis, but very distant — probably half a kilometre away. The different ways the mics captured the reverb tail from the crows made a really big difference in how I perceived the sound. At one end of the spectrum, the SuperCMIT made the crows sound much nearer than they were: The noise cancellation was quite effective at getting rid of the ambient wind and traffic noise, as well as the echoes that helped me identify where I was. It produced a very dry recording with minimal background noise that would be perfect for a sound library. The CS-3e was not effective at removing the background noise, but it did get rid of the reverb tail that illustrated the sound of the space. The unprocessed SuperCMIT sounded fairly similar to the CS-3e. The CMIT and the MKH-60, on the other hand, preserved the reverb tail and the sense of space and distance. I could tell I was outdoors, and I could tell that the crows were far away. The MKH-60 was better than the CMIT in this regard: It produced the most realistic recording, meaning it was the closest to what I heard when I took off my headphones and listened to reality. Its low noise floor also helped the sense of realism.
The crows helped me identify a property of sound that is really hard to describe or quantify: I used the word realistic just now; transparent is another word. What I’m getting at is how faithfully the mic is able to reproduce the space that it is in. That comes down to the detail and clarity in the recording. I identified that realism with the ability to hear the reverb tail of the crow calls. On this basis, I was able to rank the mics from most to least realistic: MKH-60, CMIT, SuperCMIT (unprocessed), CS-3e, SuperCMIT (processed). Or, with a different recording priority, I could reverse the ranking according to which created the best recordings for library effects (best being dry, with minimal background).
I had a couple opportunities to test some common sound issues: A siren passing by in the distance, and a plane overhead. The noise-cancelled SuperCMIT was mildly useful at reducing both of these noises. The CS-3e was even less effective. They were very slightly less intrusive underneath the on-axis speech, but overall I wouldn’t say they made a big enough difference to care. If I was recording in those circumstances, I would have noted the disruption at roughly the same point with all the mics. The SuperCMIT bought at most a second or two. That’s not nothing, but it’s not worth the sacrifice in realism compared to the MKH-60, in my opinion.
As with the indoor tests, the biggest differences showed up when comparing off-axis behaviour. I’ll go one by one, since every mic showed markedly different characteristics. As before, I did two tests: One walking around the mics while speaking, as a way to judge the off-axis frequency response, and one with music playing at 150° off-axis while I was speaking on-axis, as a way of judging rejection.
The noise-reduced SuperCMIT stood out as having the most natural off-axis sound, especially beyond 90° off-axis. It tied for second best for the amount of rejection, but the sound was clear; at 180°, I could understand what I was saying perfectly when there was no competing audio on-axis. It also reduced the level of the ambience, so my on-axis voice did benefit from the off-axis rejection of the music. If I needed to hear a voice over the surroundings, but I needed the surroundings to be audible but quieter under the main subject, this was the clear winner.
The CS-3e had by far the strongest noise rejection at 180°, bringing my speaking voice down to the level of the background noise. My voice sounded thin from this angle, and was difficult to distinguish from the ambient noise. This is a significant difference from the SuperCMIT: because the CS-3e did not reduce the overall ambience in the same way, it did a better job of burying the off-axis sound in the ambience. The off-axis frequency response is terrible; the off-axis music sounded thin, unbalanced, and phasey. Most of the rejection seemed to be in the low frequencies. But it did provide the best separation from the on-axis sound.
The CMIT maintained its reputation for rolling off neutrally up to about 90° off-axis, but it performed quite badly beyond 90°. It rejected less noise than all the other mics, and the off-axis music was very muddy and muffled, to the point of being incomprehensible. The off-axis colouration didn’t sound as ugly as the CS-3e, but it was made worse by the fact that it was so much louder than on the other mics. This was my least favourite choice for trying to isolate the on-axis sound in a live environment.
The unprocessed SuperCMIT gets special mention for being almost as clear off-axis as the processed version, but with the same poor rejection as the regular CMIT. It was interesting what a hybrid of the two it was: It was never a blend of the two mics; each characteristic took after one or the other. Unfortunately, the lack of good rejection makes it a poor choice for booming in a crowd, but at least it doesn’t make the crowd sound unnatural.
The MKH-60 had the worst rejection at 180°, but was second only to the processed SuperCMIT for sounding natural in that position. As with the indoor tests, it sounds a bit muffled off-axis, but unlike the CMIT, the muffling is gradual and consistent all around the mic. It has a narrower sweet spot than the CMIT, but beyond 90° it sounded better overall, and speech was still comprehensible. The MKH-60 had the most noticeable rear lobe, so its rejection at 180° was not good, but at 150° its isolation was on par with the SuperCMIT. This was the only mic where I really noticed the unevenness of the rejection pattern. I would have expected the CMIT to share this rear lobe, but perhaps its rear rejection was so poor overall that I didn’t notice.
I have one final comparison that ties everything together: The sound of a car passing by while someone is speaking on-axis. This happened a couple times organically, and I also staged a test by driving by while music was playing on-axis. The test started with me starting my car in the background (at about 150° off-axis, 20 metres away), and it was the startup that illustrated the biggest differences in my opinion. Not surprisingly, the processed SuperCMIT had the best noise rejection, and the lowest ambience overall. However, because it wasn’t masked by the ambience, the engine startup was still audible and distracting, even though it was fairly quiet in the background. For the same reason, the road noise stayed audible longer than it might have with a higher ambience level. The CS-3e was even worse. Despite its ability to cancel noise, its off-axis frequency response made the car’s engine more noticeable and distracting. Out of all the mics, the engine startup and drive-by were most clearly identifiable and distinct on the CS-3e. The remaining three samples (CMIT, unprocessed SuperCMIT, and MKH-60) were all fairly similar, and all preferable to the two noise-cancelling mics. They de-emphasized the tonality of the engine, and most of what could be heard was broadband road noise that was less distracting than hearing the engine clearly. Of the three, I thought the MKH-60 was the most balanced. I could hear a touch more tonality in the engine, but the road noise faded into the background more quickly. Even though it doesn’t have the best rejection or the best off-axis tonality, it did the best job of hiding the off-axis sound in the background, which is ultimately what I want from a shotgun mic.
Analysis
These listening tests made me re-think a lot of what I knew about what makes a good shotgun boom mic. I’ve always assumed it was pretty simple: The more off-axis rejection the better, and the off-axis roll-off should be as neutral-sounding as possible. What I’ve realized is that these are just two of many competing priorities, and favouring just those two compromises some of the other priorities.
Let me spell out some of these priorities:
Isolation: the amount of off-axis rejection, measured in dB.
Focus: the perceptual ability to distinguish the on-axis audio from the surroundings.
Background suppression: how effectively the mic rejects noise from diffuse sources, i.e. ambience or room tone.
Reverb suppression: how dry the on-axis sound is.
Point-source rejection: the ability to reject a discrete off-axis source.
Realism: how natural the audio sounds.
Fidelity: how closely the microphone reproduces what the ear actually hears.
Neutrality: frequency response, or the degree to which the microphone colours the sound.
That’s a long list, so bear with me as I try to explain how they relate and why they matter.
Broadly speaking, I think there are two different approaches to recording dialogue, which I will call drama vs. documentary. For drama, the goal is to get the absolute cleanest tracks under the assumption that the recordings are just small pieces in a much larger sound design. They need to have the best signal-to-noise ratio possible (isolation and background suppression), an uncoloured frequency response (neutrality), and as little reverb as possible (reverb suppression). For documentary, the goal is to get tracks that match what the camera sees (fidelity) under the assumption that the track may end up playing in the final mix more-or-less as recorded (realism), without compromising the ability to follow the action (focus). These aren’t necessarily cut-and-dried; not every drama needs to be heavily sound designed, and not every documentary needs to be verité. But they are creative choices that we make as recordists, ideally while keeping a particular post workflow in mind.
I can place the four microphones on a fairly linear spectrum between these two approaches. This is the same ranking that I discovered when listening to the crows: SuperCMIT, CS-3e, CMIT, and MKH-60. I’ll sum up my thoughts in that order.
SuperCMIT
The noise cancellation is real and effective. And it isn’t nearly as desirable as I thought it would be, because it compromises realism so much. When I purchased the SuperCMIT, I assumed (with some help from Schoeps’ marketing) that the unprocessed channel would be a close substitute for a regular CMIT. My hope was that I would get the benefits of a regular CMIT, and the bonus of noise-cancellation when I needed it. This is absolutely not the case; despite having an identical capsule and interference tube, the unprocessed SuperCMIT is a weird mix between the CMIT and the noise-cancelled channel. And, because of the phase-shifts inherent in the noise-cancellation, the two channels can’t even be mixed. I can’t see too many scenarios where I would deliberately choose to use the unprocessed SuperCMIT, but it’s a nice safety against too much noise-cancellation.
So, I had to reconsider what I wanted from the mic. Noise-cancellation is the reason this mic exists, and it is absolutely unique in that regard. The SuperCMIT stood out for the background suppression it offered: It rejects diffuse noise as well as off-axis noise, and that means it appreciably lowers the signal-to-ambience ratio. I say signal-to-ambience instead of signal-to-noise because not infrequently it removes so much ambience that only the SuperCMIT’s self-noise is left. Reading anecdotally, I found a number of complaints that the SuperCMIT is quite a noisy mic. Schoeps’ published specs barely support this; they give three different noise specs, which are only 1-3dBA above the same specs in the CMIT. This should be barely noticeable, but perceptually it seems much noisier because the noise cancellation pushes the ambience and off-axis sounds much closer to the noise floor.
One caveat: The background suppression only seemed to work in open spaces; it did not reduce background noise indoors. Even worse, indoor use created a strange double-image that made the on-axis sound phasey and indistinct. I don’t think I would want to use the SuperCMIT indoors under any circumstances other than perhaps a very large space (an open studio or an arena perhaps).
The noise-cancellation has a couple other effects as well: It suppresses reverb along with everything else, which makes it terrible for reproducing a sense of space (realism), but very good at creating a dry recording that is perfect for sound effects and post-processing. It brings the on-axis sound perceptually closer (because the relative level of direct and indirect sound is part of how we perceive distance). Put another way, the SuperCMIT has more “reach” than just about any mic I’ve heard, at least when used in an open space. I was able to zoom in on sounds in a way that other mics aren’t capable of.
Despite its very effective isolation, the SuperCMIT is only so-so for creating focus. Why? Because there’s more to creating a perceptual separation between wanted and unwanted sound than making the wanted sound louder and the unwanted sound softer. Perceptual focus is more about making the unwanted sounds blend into the background than about removing them entirely. This is similar to why the SuperCMIT sounds noisy; it takes away so much background noise that whatever is left draws attention to itself, whether it is a car starting in the background or the self-noise of the microphone. The SuperCMIT’s extremely neutral off-axis response works against it here. Frequency is important to creating a background to blend into: background noise is a low frequency cacophony that our brains tune out while we focus on the sharp, percussive, higher frequency sounds that capture our attention. By maintaining the fidelity and frequency-balance of its off-axis sounds, the SuperCMIT robs us of the frequency cues that we need to direct our attention.
The SuperCMIT is perfect for creating dry, isolated dialogue in a controlled outdoor environment. It excels at removing unwanted ambience and bringing the subject perceptually closer to the mic. I would also use it for outdoor interviews where I know the recording will play under visuals that don’t match the surroundings. The isolation comes at a price: It removes the sense of space, and makes voices seem disembodied and unnatural. It sounds accurate (in terms of frequency response), but it doesn’t sound good. I would not use it in any sort of verité environment, or on any shoot where I think there’s a chance the raw recording would find its way into the final mix without post-processing and sweetening. I wouldn’t use it as an everyday mic, but it is such a unique tool that I can see myself holding on to it for the circumstances that call for it.
CS-3e
I was surprised how much I disliked the CS-3e. In my area, this is the outdoor microphone for most mixers, on account of its reputation for superior isolation due to its ability to cancel noise via a line array of capsules. I don’t feel this reputation is warranted.
Like the SuperCMIT, the CS-3e’s cancellation does suppress quite a bit of reverb, which makes the on-axis sound drier and helps separate the on-axis sound from the background. This gives the microphone more “reach”. Also like the SuperCMIT, this only works outdoors; indoors it just sounds indistinct and phasey. Unlike the SuperCMIT, the CS-3e didn’t seem to help with isolation very much, though it did filter out the low-frequency rumble of distant traffic. The off-axis rejection is quite unbalanced, with fairly strong low-frequency rejection, leaving a very coloured, unrealistic sound behind.
This colouration is the reason I don’t feel the CS-3e lives up to its reputation. In objective terms, I believe it may measure well (reducing low frequencies off-axis probably reduces a lot of the sound power, which would be noticeable on a VU or peak meter), but it does not create good perceptual focus. By removing the low frequencies, it leaves behind an unbalanced midrange that draws attention to off-axis noise instead of hiding it. This is similar to the SuperCMIT’s noise floor problem: By removing too much, it reveals sounds that would otherwise be hidden. When I did the drive-by test, the engine starting in the background was by far the most distracting on the CS-3e because I could hear it most clearly. And not only was it distracting, it also didn’t sound good, because the off-axis colouration was so severe. In absolute terms it may have had slightly less traffic rumble (I would guess at most 3dB) compared to the non-noise-cancelling CMIT and MKH-60, but the price of this reduction was that non-ambient off-axis noises were much more noticeable.
Even though the on-axis performance sounded fairly flat and neutral, the amount of reverb suppression combined with the very unrealistic sound of the surroundings gave an overall impression that was quite unrealistic. In my previous experiences booming with the CS-3e for other mixers, I always had the impression that the mic sounded “harsh”. I think the lack of realism that I discovered here explains why I had that impression.
Another issue I had with the CS-3e was noise floor. It was easily the noisiest of the mics I tested, with the noise floor clearly audible in ambience, even outdoors. I’m guessing this is partly because the noise of three capsules in the design is additive, but knowing why the noise is there doesn’t change the fact that I can hear it. It may also have suffered from the same effect as the SuperCMIT, where the cancelled ambience drew attention to the noise floor that wasn’t cancelled.
I think the only use-case I can think of where I might want to use the CS-3e would be in a controlled environment (i.e. drama with a lockup) with a loud, immobile noise source (generator?), while also knowing that side noises were unlikely (and therefore the colouration wouldn’t be an issue). The 180° rejection of the CS-3e was very, very effective, much more so than any of the other mics, including the SuperCMIT, so perhaps I would use it if I needed the absolute strongest rear null I could find.
CMIT
The CMIT lived up to its reputation for having a smooth roll-off in its off-axis response. It sounded clear and neutral on-axis, and audio that wasn’t quite on-axis also sounded clear and neutral, though quieter than the on-axis sound. This was true up to about 90°, beyond which the smooth response fell apart. Sound from behind the mic sounded fairly distorted and unnatural. In practice, I don’t think this matters much, since in a typical booming position the rear of the mic is facing the sky, where there are no noise sources (other than planes), and therefore no reason to worry about it sounding unrealistic. I might choose a different mic (SuperCMIT or MKH-60) if I knew I had a sound from the rear that I wanted to reject in the background, but this isn’t a super common scenario.
Listening to the overall impression created when recording an on-axis voice in a live environment, the CMIT sounded neutral and realistic. It did a good job of separating the main subject from the background, and the background was quieter, but still part of the scene. This pretty much matches my preconceptions for what I thought a good shotgun mic should be when I started the test. It has good isolation, and it is transparent and realistic in most circumstances. I can see why it has the reputation it does, and why it is a workhorse for so many mixers. It sounded decent, but not amazing indoors, with a noticeable bump in the lower frequencies that I attribute to the proximity effect being amplified by the reverb in the room. It sounded flatter and more neutral outdoors. I could live with it indoors if I had to, but there are definitely better options (it’s a shotgun, so that is expected).
I have two negative things to say about it. One, its isolation was about 3dB poorer than all the other mics in the test. Meaning: The outdoor ambience was about 3dB louder compared to the primary signal. This had a small effect on perceptual focus, but wasn’t a deal-breaker in my opinion. Perceptually, I found the noise-cancelling mics to be worse for focus because of the way they cancelled the background ambience but left behind off-axis noises (even if those noises would have measured quieter than the CMIT). Two: The noise floor was audible in the ambience. I wouldn’t say this was a major issue; it was quieter than both the CS-3e and the SuperCMIT. But it was there. My test was in a fairly quiet outdoor neighbourhood, so there are probably many places I could have recorded where the noise floor would have been completely buried.
Overall, I would have no hesitation using the CMIT as my default outdoor boom mic. It sounds great, provides isolation, and is forgiving off-axis. I especially like it for documentary scenarios where I can’t always anticipate who is talking next; the smooth off-axis response and lower isolation are actually helpful here, since they are more forgiving when I don’t get the mic on-axis in time. However, I probably wouldn’t choose it in an uncontrolled environment (e.g. a crowd) where separating the voice from the environment is the primary need.
MKH-60
The MKH-60 stood out to me as the best overall compromise between the various priorities that I identified in my preliminary analysis. It didn’t stand out as the best for any individual trait except noise floor, but I think it is the mic I would choose first for the broadest number of scenarios.
I thought it was the most transparent, realistic-sounding mic. This was most apparent to me when listening to the crows: Their calls had the longest, nicest-sounding reverb tail, which also happened to be the closest to what it sounded like when I took my headphones off and listened to the live sound. It gave me the most realistic sense of space of all the microphones. In most scenarios, this realism was a real strength. In many respects, it sounded very similar to the CMIT, and I had trouble distinguishing the two mics at times (usually when there was no obvious off-axis sound to compare). The extremely low noise floor also helped with transparency; it is the only mic whose noise floor I couldn’t hear in any of the tests. In fact, the only time I have ever noticed its noise floor was when I was recording in the arctic, 400 kilometres from the nearest road.
It is not as smooth off-axis as the CMIT; it sounds dull and slightly muffled from pretty much every angle off-axis. On the plus side, once the sound is off-axis, it sounds consistent and doesn’t change much from front to back the way the CMIT does. Sources from the rear sound much the same as from the side or as little as 30° off-axis. This is the classic shotgun sound: good high-frequency isolation but not much low-frequency directivity.
I didn’t realize until this test how useful this characteristic actually is. Even though it’s objectively less neutral, the lack of low frequency isolation actually helps the shotgun do its primary job, which is to create perceptual focus on the on-axis sound. The high-frequency roll-off means the off-axis sound all mushes together into a background rumble, and it becomes much harder to distinguish individual sounds compared to the on-axis sound. It prevents off-axis sounds from being distracting because our brains tune out the muddy rumble and focus on the mids-and-highs that stand out from all the rest. I have changed my opinion about wanting my shotgun mics to have neutral off-axis frequency response. This would be desirable when recording music or ambience, but it is not desirable in a mic that is intended to direct the listener’s attention.
This conclusion comes directly from what I experienced in the outdoor listening tests, especially the car drive-by. In this test, the MKH-60 rendered the engine starting in the background the least distinctly of all the microphones: I could barely hear it, and I wouldn’t have noticed it at all if I hadn’t been listening for it. This is very different from the CS-3e, where the engine start-up was crystal clear and quite distracting. I daresay the background ambience in the MKH-60 was objectively louder, but it was easier to tune out and less distracting.
It also sounded more natural. I would say the CMIT, and especially the SuperCMIT, had better fidelity (i.e. they were closer to what the car start-up sounded like in person), but the MKH-60 was closer to what I expected to hear when I was trying to listen to the dialogue rather than the overall soundscape (which is the normal scenario for a shotgun recording).
The muffled off-axis response does have a disadvantage: when speech was competing with background noise, my ability to comprehend it was slightly worse than with the other mics. I attribute this to a combination of less reverb suppression and the background ambience being more broadband. In a very noisy or heavily reverberant environment where every dB of separation counts, it might make sense to sacrifice the realism of the MKH-60 for the better isolation of the noise-cancelling SuperCMIT. But it wasn’t a big difference, and in most scenarios I would pick the better realism and perceptual focus over pure comprehension.
Conclusions
When I started this test, I was not expecting that my workhorse MKH-60 would end up being my favourite. This is the cheapest, most unassuming and “normal” shotgun of the bunch, and it’s out of production to boot. Most of the other mixers I know use the more expensive CMIT and CS-3e as their default shotgun mic, and I assumed that there was good reason for that preference. I have come to disagree. Knowing what I know now, I will continue to use the MKH-60 as my everyday shotgun mic. It is the cleanest, most detailed, most realistic-sounding mic of the bunch, and I really liked its ability to focus attention.
I’m having trouble deciding whether I want to keep the CMIT. The CMIT and the MKH-60 are quite similar; the differences are subtle, and there aren’t many scenarios where one would work and the other wouldn’t. But differences do exist, and, although I prefer the realism of the MKH-60, the more forgiving off-axis performance of the CMIT might be useful given the amount of unscripted documentary I do. While I’ve never had any complaints about the MKH-60 on documentary work, I think the CMIT is a slightly better choice for run & gun situations where I’m following multiple characters and don’t anticipate needing to reject a lot of off-axis sound.
I did not anticipate how much I would dislike the CS-3e. To my ears, it just doesn’t sound good, and it’s the only mic I can’t see keeping in my kit even for special circumstances. I confirmed my subjective impression that it sounds harsh, and, despite the noise-cancelling design, I found that its off-axis performance made it more distracting, not less. And for anything where I really do need more reach or noise-cancellation, the SuperCMIT was both more isolated and less coloured.
I don’t love the SuperCMIT as an everyday mic. In most scenarios, I don’t like how artificial it sounds, and the artificiality is a direct consequence of its noise-cancelling design. I find the lack of “space” disorienting. But it is doing exactly what it is designed to do, and there are scenarios where a very dry, clean signal is exactly what is needed. It is exceptional at separating the on-axis subject from the background noise, and it has more “reach” than any other mic I’ve experienced. I would use it for sound effects, and in noisy environments where I know post will be processing the track heavily anyway. It’s good at suppressing reverb in large, open spaces (though not in most typical indoor locations), and I can see myself using it in gyms, arenas, or churches. I might also use it for outdoor “wilderness” interviews when there is traffic nearby, or when I know it will be edited under visuals that aren’t outdoors. There isn’t really anything else like it, so I would keep it for the scenarios where I know it can do something that other mics can’t.
I was surprised how little difference there was between the mics in isolation, i.e. the ability to suppress background ambience. The SuperCMIT was slightly better at isolation than the others, and the CMIT was slightly worse, but I was expecting the amount of isolation to be a major source of difference, and it wasn’t. Moreover, I discovered how unimportant the amount of isolation is perceptually. Making the background a couple of dB louder or softer had very little effect on how I perceived the recordings; for the most part, my brain just tuned out the background and focussed on the foreground. There is certainly a threshold where the background would become too loud and start to affect comprehension, but below that, good recording technique matters far more than the amount of isolation. Yes, there are sometimes technical reasons why every dB counts, but for the most part, I found that the frequency balance of the ambience and off-axis sound played a much larger role in how easy it was for my brain to focus on the main dialogue.
I was also surprised how little difference there was in on-axis response. If I wasn’t listening to the background, or there wasn’t much in the background to listen to, all of the recordings sounded very similar, and frequency response simply wasn’t an issue. For the amount of attention it gets on the spec sheet, frequency response just wasn’t a factor in differentiating these mics. Perhaps that’s because I was testing a slate of very high-end mics, but I’ll give it less weight when thinking about mics in the future (and it’s usually easy to change with EQ anyway).
Finally, although it’s common sense (at least among professional mixers) that shotguns aren’t intended to be used indoors, I somehow assumed that I could make useful judgments between them by testing them indoors. This was a mistake, and I mostly didn’t consider the indoor performance in my analysis and conclusions, but I leave the tests in as an example of how ineffective they were. It was also useful to hear just how bad the two noise-cancelling mics sounded indoors; both sounded robotic and phasey trying to deal with room reflections. I was a bit surprised that neither noise-cancelling mic had any effect on the level of the room tone; my tentative conclusion is that room tone is so diffuse and omnidirectional that the noise-cancellation simply couldn’t operate.
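That hunch is easy to demonstrate with a toy model. The SuperCMIT’s actual DSP is proprietary and far more sophisticated, but even the simplest two-capsule delay-and-subtract beamformer shows the principle: it can null a coherent sound arriving from one direction, but it can do nothing with diffuse noise that is uncorrelated between the capsules. A minimal sketch, with all numbers illustrative and no relation to the real algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48000   # sample rate, Hz
delay = 7    # inter-capsule travel time in samples (~5 cm spacing at 343 m/s)
n = 2 * fs   # two seconds of "audio"

# Coherent sound from the rear: the rear capsule hears it first, and the
# front capsule hears the same waveform `delay` samples later.
src = rng.standard_normal(n + delay)
rear_coh, front_coh = src[delay:], src[:-delay]

# Diffuse room tone: statistically independent at the two capsules.
rear_dif, front_dif = rng.standard_normal(n), rng.standard_normal(n)

def beamform(front, rear):
    # Delay-and-subtract endfire beamformer: time-align the rear capsule
    # to the front one for rear arrivals, then subtract -> null at the rear.
    return front[delay:] - rear[:-delay]

def power_db(x):
    return 10 * np.log10(np.mean(x**2) + 1e-12)  # tiny floor avoids log(0)

for label, front, rear in [("coherent rear source", front_coh, rear_coh),
                           ("diffuse room tone   ", front_dif, rear_dif)]:
    change = power_db(beamform(front, rear)) - power_db(front)
    print(f"{label}: {change:+6.1f} dB")
# -> the coherent rear source is nulled (limited only by the 1e-12 floor),
#    while the diffuse room tone comes out ~3 dB *louder*.
```

The coherent rear arrival drops to the numerical noise floor, while the diffuse “room tone” actually comes out about 3 dB louder, because the two independent noise signals add rather than cancel, which matches what I heard.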
These tests challenged what I thought I knew about what a “good” shotgun mic is, and they have changed how I think about my goals for “good” sound. I’ve never had the luxury of a large mic cabinet before, but I now have a much more nuanced idea of which priorities I’m trying to balance when I choose a mic. Do I need the most isolation? (Probably not.) Do I need to suppress background ambience, or a specific off-axis noise? How dry do I need the subject to sound? How much do I want to place the subject in the space I’m recording in, and how much do I want to abstract the subject from its surroundings so it can be processed in post? These are all questions I thought I knew the answers to before, and to which I now have different answers. I hope that my learning will be your gain as well.