|The latest AF systems mean I can take for granted that the photo will be focused where I want it to be, so I can think about composition, concentrate on interacting with my subject and capture the right moment.|
Photo: Richard Butler
My colleague Barney has already looked at the broader trends in the industry over the past ten years, so it’s fallen to me to look at how the technology has changed in that time. From the perspective of someone who’s spent all of the last decade testing and reviewing cameras, I’m going to argue that the two biggest areas of improvement and change have been autofocus and video.
Barney wrote that in 2010 we had ‘DSLRs with highly advanced autofocus systems,’ while the early mirrorless autofocus systems were often slow and clumsy. But in the decade that’s followed, we’ve seen mirrorless AF not only catch up to DSLRs but begin to offer greater capabilities, often in an easier-to-use manner and across a much broader range of the market.
You don’t need to buy a D300S-level camera to get what used to be considered ‘pro-grade’ AF performance: you can find it, and a lot more, in sub-$1000 cameras that you can essentially point and shoot with. A number of changes have brought us to this point.
Lenses designed for mirrorless
One of the biggest changes is probably the hardest to see: a change in the way lenses are designed. The brute-force approach of ring-type focus motors and unit focus designs (moving a unit with multiple lens elements) used in DSLRs isn’t a good fit for the way most mirrorless cameras need to operate.
Those large focus elements meant a lot of inertia, which is a problem for the back-and-forth movements required by contrast-detection autofocus. In addition, while ring-type motors are great at moving quickly, they’re not the best choice for moving slowly, smoothly and quietly, as required for video shooting.
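To see why contrast detection demands those back-and-forth movements, here’s a minimal, hypothetical sketch of the hill-climbing approach it relies on. The camera can’t measure depth directly, so it nudges the focus element, compares sharpness readings, and reverses when contrast drops; the `sharpness_at` function and step sizes are illustrative assumptions, not any manufacturer’s algorithm.

```python
def contrast_af(sharpness_at, position=0.0, step=0.1, min_step=0.01):
    """Hill-climb toward peak contrast (toy model).

    sharpness_at: hypothetical function returning a contrast score
    at a given focus-element position. The loop reverses and halves
    the step each time sharpness drops -- the 'hunting' motion that
    heavy DSLR focus groups handled poorly.
    """
    best = sharpness_at(position)
    while abs(step) >= min_step:
        candidate = position + step
        score = sharpness_at(candidate)
        if score > best:
            position, best = candidate, score  # keep climbing
        else:
            step = -step / 2                   # overshot: reverse, refine
    return position

# Example: a scene whose contrast peaks at focus position 0.5
print(round(contrast_af(lambda x: -(x - 0.5) ** 2), 2))  # → 0.5
```

Every reversal in that loop is a physical change of direction for the focus group, which is why low-inertia, single-element designs suit this style of autofocus so much better.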
In recent years we’ve seen many manufacturers change their optical designs so that they can be focused with a single, lightweight focusing element. With less inertia, these can be moved with greater subtlety. The latest lenses often feature two independent focus groups, helping to avoid any deterioration in quality at close-focus distances.
|The retractable design of Canon’s RF 70-200mm F2.8 has caught all the attention, but the use of independent focus groups, both light enough to be driven by innovative ‘Nano USM’ motors, is also a huge departure from its DSLR counterpart.|
Alongside changes in optical design, we’ve also seen the development of new types of focus motor, usually less powerful than ring-type ultrasonic motors but instead able to provide both speed and precision control for these small-focus-element lenses. The overall result is a new generation of lenses that can perform as well as or faster than their DSLR predecessors, while also providing visually smooth focus for video.
On-sensor phase detection
In parallel, we’ve seen the development of on-sensor phase detection technologies. These first appeared in Fujifilm compacts, then in Nikon’s 1-series mirrorless cameras, before being widely adopted by other companies. At their simplest, these systems selectively look at the scene through the left and right sides of the lens, building up a sense of depth in the scene, much as humans do by comparing the information from their left and right eyes.
|Canon took on-sensor phase detection one step further: its dual-pixel design uses split pixels to let it derive distance information from every location.|
This depth information is then used to assess which direction and how far to drive the focus element, much as the dedicated sensor AF did on DSLRs.
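As a rough illustration of the principle, here’s a toy sketch of how comparing the left- and right-pupil views yields a signed shift. The two signals are offset from each other when the image is out of focus; cross-correlating them gives both the direction and an estimate of how far to drive the focus element in one move. This is a simplified assumption-laden model, not a real AF pipeline.

```python
def phase_shift(left, right, max_shift=5):
    """Return the pixel shift that best aligns two one-line pupil
    signals (toy model). The sign tells the camera which way to
    drive the lens; the magnitude suggests how far."""
    n = len(left)

    def score(s):
        # Sum of products over the overlapping region at shift s
        return sum(left[i] * right[i + s]
                   for i in range(max(0, -s), min(n, n - s)))

    return max(range(-max_shift, max_shift + 1), key=score)

left  = [0, 0, 1, 3, 1, 0, 0, 0, 0, 0]  # feature seen via left pupil
right = [0, 0, 0, 0, 1, 3, 1, 0, 0, 0]  # same feature, displaced
print(phase_shift(left, right))  # → 2: out of focus, drive one way
```

The key contrast with the contrast-detection approach is that no hunting is needed: a single comparison yields a direction and distance, which is what made DSLR-style phase detection so fast and what on-sensor versions now replicate.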
The other major leap forward has been in subject-aware autofocus. Nikon in particular had made some steps in this direction using its DSLRs’ RGB metering sensors, but the move, with mirrorless, to focusing using the main imaging sensor has allowed cameras to develop a much more sophisticated understanding of what they’re shooting.
The latest generation of cameras are beginning to use AF algorithms trained by machine learning
Face Detection had featured in compact cameras for some time, but the power and accuracy of such systems has changed completely in the past few years. Olympus introduced eye-detection AF in 2012’s E-M5, and such systems have only become more responsive and more reliable with further development, greater processing power and input from on-sensor phase detection.
Which brings us almost up to the present. The latest generation of cameras from Panasonic and Sony use AF algorithms trained by machine learning (analysis of thousands of images) that let the cameras recognize what they’re focusing on. This lets them stay focused on people or pets without getting confused if the subject turns away from the camera. It’s got to the point that the latest $600 mirrorless camera will give a 2010 pro-sports camera a run for its money, perhaps even in the hands of a beginner.
|The Sony a6100 is a pretty modest model in many respects, but it has an AF system that’s both easy to use and in many respects more powerful than the pro DSLRs of ten years ago.|
I didn’t notice the full impact these changes had made to my photography until this article forced me to think back to how I shot in 2010. Back then I’d have mainly stuck to AF-S, using AF-C solely for sports shooting, and would have expected to have to keep the camera pointed at my subject when doing so. These days I take for granted being able to leave most cameras in AF-C and use AF tracking for almost everything. And the cameras with responsive eye detection have become the ones I most enjoy for portrait shooting, simply because knowing the subject will be in focus frees me up to talk to my subject and devote more of my brain to lighting and composition.
This is only likely to continue to improve, especially as traditional cameras try to stay competitive with the smartphones backed by the computing know-how and seemingly endless R&D resources of the likes of Apple and Google.
The other obvious change of the last decade has been the ever-evolving quality and capability of video capture in stills cameras. Ten years ago, video from stills cameras was in its infancy: the Nikon D90 and Canon EOS 5D Mark II had brought high resolution video to consumer cameras just a year before, and Canon’s cameras were seen as the preeminent video tools for keen videographers and small production companies.
Ten years later and we’re testing a camera that can produce 4K footage good enough for high-end professional video production, and even the sub-$1000 models from most brands are packed with an array of video tools that easily eclipse the 5D Mark II.
I remember being amazed when I first saw this clip from the GH3 on a 1080 TV. I also remember how piercing the sound was, as Clan Line passed inches from my head, as I shot it.
To an extent, much of the story can be told by following the progression of Panasonic’s GH series. After the success of the EOS 5D II, Canon switched a lot of its video efforts to the more pro-focused Cinema EOS line, leaving the way clear for Panasonic to produce a succession of stills/video cameras with ever more high-end video features and ever more impressive output.
The GHs were some of the first stills/video cameras with 1080p video, the first to shoot 1080/60p, the first to shoot 4K video and the first to shoot 10-bit footage. They were also some of the first cameras we saw to include features like focus peaking, adjustable zebra exposure indicators and, more recently, vectorscopes and waveform displays.
|The Panasonic GH5S became the first stills/video camera to offer a waveform display for assessing video exposure.|
There’s also perhaps a history to be written about the hacking projects that helped extend the capabilities of both Canon and Panasonic’s video cameras (which perhaps made clear to manufacturers how dedicated and eager the audience for such cameras was).
These cameras in particular have been responsible for much of what I’ve learned about video shooting: each successive model has forced me to go off and learn or go out shooting to make sure I appreciated how each feature and spec addition helps for videography.
Having to learn to shoot video for the reviews I’ve written has kindled a real personal interest in videography
Sony brought many of these things to the mass market, incorporating many of these specs and features into its more mainstream models. Panasonic’s GX8 beat the a6300 to the punch in terms of offering 4K, but the Sony added previously exotic features such as Log capture, which inspired me to embark on my first proper video shoot.
But video is no longer the preserve of Panasonic and Sony. I doubt anyone would have predicted the speed with which Fujifilm has gone from producing some of the worst video in the industry to some of the best. Interestingly, things have almost come full circle, with Nikon offering Raw video output from its Z6 and bundling the camera with a gimbal and external recorder for budding film makers. That’s a fair leap forward compared with the D90.
Shooting this video involved learning to use a one-handed gimbal, which is tremendous fun. The final result is probably the creative work from the last ten years that I’m most proud of.
Interestingly, these developments are beginning to dovetail with the AF changes I described. In much the same way that pro sports shooters are still unlikely to depend on subject tracking, many professional videographers will continue to depend on their own skills and experience. But for the rest of us? Video autofocus is only going to get better at maintaining focus where we want it, or smoothly transitioning between selected subjects.
These improvements in video and autofocus will just make life easier, meaning we can concentrate on the creative aspects that matter.