The Extraordinary Secret of Cephalopod Vision

All Draw Curiosity videos are fully subtitled in English and Spanish. The blog post builds on the concepts touched upon in the video.

I hope you enjoyed this video! I had loads of fun crafting and designing it – I never expected blurry eyes, cameras and cuttlefish to be related in any way! So… let’s dig a little bit deeper into some of the concepts covered in the video:

How do certain drugs cause the pupil to dilate?

The scientific term for this is ‘mydriasis’. Certain medications dilate the pupil by blocking the muscarinic acetylcholine receptors in the eye, which prevents the iris sphincter from constricting in response to light. The radial fibres of the iris then pull the pupil open, increasing the aperture of the eye. Some drugs are used by opticians specifically to induce pupil dilation, but many other medications – including anticholinergics and those that affect serotonin levels in the brain – can also affect the iris’s response to light.

How can I take beautiful bokeh pictures?

Bokeh is the photographic term for the quality of the out-of-focus areas of an image. It is most often used in the context of subject isolation: the subject of the photograph is in sharp focus while the background is out of focus, creating an aesthetically pleasing effect.

There are two main techniques I use to take pictures with a beautiful blurry background:

  • Open the aperture wide. The lower the f-stop on your camera, the wider the aperture and the more light enters the camera. A wide aperture also narrows the depth of field – the range of distances that appear in focus. By placing that narrow zone of focus on your subject, you can easily isolate it from the background, and the wider the aperture and the better the quality of your lens, the softer and blurrier the background will be.

    This picture was taken with a Canon 50mm F1.8 STM with the aperture at its widest.
  • Zoom in. Longer focal lengths tend to generate a similar effect.

    This picture was taken with a zoom kit lens at 55mm F5.6. Although F5.6 isn’t that low, shooting at 55mm rather than at the wider 18mm end still narrows the depth of field significantly.
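If you like numbers, the two tips above fall straight out of the standard depth-of-field formulas. Here is a minimal Python sketch – the 50mm focal length, f-numbers and 0.03mm circle of confusion are illustrative full-frame values, not measurements from the photos above:

```python
# Depth of field from the hyperfocal-distance formula.
# All parameters are illustrative (50mm lens, full-frame 0.03mm
# circle of confusion), not taken from any specific photo.

def depth_of_field(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Return the (near, far) limits of acceptable sharpness, in mm."""
    # Hyperfocal distance: focus here and everything to infinity is sharp.
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_mm * (hyperfocal - focal_mm) / (hyperfocal + subject_mm - 2 * focal_mm)
    if subject_mm >= hyperfocal:
        far = float("inf")
    else:
        far = subject_mm * (hyperfocal - focal_mm) / (hyperfocal - subject_mm)
    return near, far

# A 50mm lens focused on a subject 1 m away: wide open vs stopped down.
for f_number in (1.8, 5.6):
    near, far = depth_of_field(50, f_number, 1000)
    print(f"f/{f_number}: sharp from {near:.0f}mm to {far:.0f}mm "
          f"({far - near:.0f}mm deep)")
```

At f/1.8 the zone of sharpness around a subject one metre away is only a few centimetres deep, which is exactly why the background melts away.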

Do cephalopods really see like that?

In truth, no one knows – in the same way none of us knows how anybody else sees, or whether we all see the same way. However, my reconstruction is based upon my interpretation of the literature on cephalopod vision, and particularly on the research carried out by Stubbs & Stubbs, which you can read in depth here: “A Novel Mechanism for Color Vision: Pupil Shape and Chromatic Aberration Can Provide Spectral Discrimination for Color Blind Organisms” (Stubbs & Stubbs, 2015).

Why do they see the world that way?

Although cephalopods and humans share a similar camera-type eye, theirs is adapted to detect and discriminate the world in very different ways. The main features that distinguish their eyes from ours are a single photoreceptor type, a U- or W-shaped pupil and a very fast-focusing lens.

Cuttlefish W-shaped eye

Humans detect colour using photochemistry: the brain integrates the signals of three specialised photoreceptor types, the cones. Cephalopods have only one, meaning that in theory they should only be capable of perceiving shades of grey. However, rather than relying on specialised chemistry, they use physics to split light into its constituent wavelengths, much like a prism, focusing each one on a different area at the back of the eye.

They can do this because of the shape of their pupil. Pupils that let light enter off-centre – such as extremely dilated circular pupils, or their own U- or W-shaped pupils – affect light in this way. In fact, the unique shape of their pupil means that regardless of the light conditions, light always enters off-axis and so always produces this effect at the back of the eye.

Additionally, their lenses focus much faster than our own, so they can sweep their focus across the different wavelengths to analyse their surroundings. It is unknown what cognitive process follows this, and whether they are capable of stacking the different slices and perceiving them as colour.
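The focus-sweeping trick can be sketched with nothing more than the thin-lens equation. In this toy model – every number here is made up for illustration, not real cuttlefish optics – the focal length varies slightly with wavelength (normal dispersion), so each colour comes to a sharp focus at a different distance behind the lens, and reading off the best-focus position tells you the wavelength:

```python
# Toy model of spectral discrimination via chromatic aberration.
# Illustrative numbers only - not measured cephalopod optics.

def focal_length(wavelength_nm, f0_mm=10.0, dispersion=2e-4):
    # Normal dispersion: shorter wavelengths refract more strongly,
    # so blue light has a shorter focal length than red.
    return f0_mm * (1 + dispersion * (wavelength_nm - 550))

def best_focus_distance(wavelength_nm, object_mm=500.0):
    # Thin-lens equation: 1/f = 1/o + 1/i  ->  i = f*o / (o - f).
    # The image is sharpest when the retina sits at distance i.
    f = focal_length(wavelength_nm)
    return f * object_mm / (object_mm - f)

for wl in (450, 550, 650):  # blue, green, red
    print(f"{wl}nm focuses {best_focus_distance(wl):.3f}mm behind the lens")
```

Because blue focuses closer to the lens than red, an eye that sweeps its lens-to-retina distance and notes where each part of the scene snaps into focus could, in principle, recover spectral information with a single photoreceptor type.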

Why do cephalopods see better in low light conditions?

Cephalopods have a single photoreceptor type, whereas we have several: our retinas are lined with rods and cones. Rods are sensitive to a broad spectrum of wavelengths, so light at any of those wavelengths sends a signal to the brain. Cones, which detect colour, have a much narrower window of sensitivity. There are three types of cones – red, green and blue, reflecting the colours they are most sensitive to – and depending on which cones are activated in each area of the retina, the brain constructs the colour being seen.

However, because of this narrower sensitivity, much more light is needed to adequately stimulate the cones and produce a colour signal. This is why in low light we only see in shades of grey: only the rods, with their much broader spectral sensitivity, still catch enough light. If our entire retina were lined with rods, rather than rods and cones, our sensitivity would be much greater and we would likely have fantastic night vision. This is the principle cephalopods exploit: all of their photoreceptors are sensitive to a wide spectrum of wavelengths, allowing them to see clearly in the low light levels at the bottom of the ocean.
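The photon-catch argument can be put in numbers. Assuming illustrative Gaussian sensitivity curves (not real pigment absorption spectra) and a flat, dim light spectrum, a broad rod-like receptor simply catches more photons than a narrow cone-like one:

```python
# Why broadband photoreceptors win in dim light: a wider sensitivity
# curve overlaps more of the incoming spectrum, catching more photons.
# The Gaussian curves and widths are illustrative, not real pigments.
import math

def photon_catch(peak_nm, width_nm):
    # Dim, spectrally flat light from 400-700nm, one photon unit per
    # 1nm band; catch = sum over bands of sensitivity at that band.
    total = 0.0
    for wl in range(400, 701):
        sensitivity = math.exp(-((wl - peak_nm) / width_nm) ** 2 / 2)
        total += sensitivity
    return total

rod_catch = photon_catch(peak_nm=500, width_nm=80)   # broad, rod-like
cone_catch = photon_catch(peak_nm=560, width_nm=25)  # narrow, cone-like
print(f"rod-like catch: {rod_catch:.0f}, cone-like catch: {cone_catch:.0f}")
```

With these made-up curves the broad receptor catches roughly three times as many photons – which is the trade-off: the narrow receptor can tell colours apart, but only when light is plentiful.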

If you would like even more in-depth information, please don’t hesitate to comment – there is so much more to be said on these topics!

I hope you enjoyed and learned something new today! Let me know what you think in the comments – I would love to know! If you enjoyed this blog and would like to be notified of new entries, consider signing up to the mailing list here and subscribing to the YouTube channel!

8 thoughts on “The Extraordinary Secret of Cephalopod Vision”

  1. I found this absolutely fascinating. Perhaps it’s partly because I am a scuba diver and really like cephalopods, but I also found your presentation really engaging. Well done you.

    1. Thank you so much – very happy to read you enjoyed this!
      I love everything about sensory ecology, and when I found out about this research I knew I wanted to talk about it somehow – so clever and yet makes perfect sense!

      ~ Inés

        1. Thanks for sharing! 🙂

          (I’m of the mindset that sharing is caring – so it is much appreciated!)

  2. This article talks about some people with no colour vision and fantastic night vision.

    I wonder if this effect could be exploited for making low-light cameras that still have colour, or possibly cover other areas of the spectrum.

    I came here from the FB A Capella Science post, so you’re getting around on FB 🙂


    1. Hey Grant – welcome to Draw Curiosity and thank you for linking me to such an interesting BBC article – I love it when people send me new things to read!

      You are completely right with regards to black and white vs. colour being of good application in cameras. Off the top of my head I can think of two instances where colour is sacrificed for getting better contrast and visual acuity in low light environments:

      • Night-vision cameras which use infrared (IR) lighting. IR cameras detect infrared light, so they are usually equipped with continuous IR lighting, or an IR flash, which is generally imperceptible to whatever they are photographing. They are popular in camera traps that survey the animals walking past a certain area at night; since most mammals are nocturnal and also avoid human presence, camera traps are really the only direct way of surveying them (other methods involve looking at their droppings and footprints, which isn’t nearly as exciting!). At home, my partner set up an IR CCTV connected to a Raspberry Pi: during the day there is enough light for colour images, and at night we turn on the IR illumination to get a B&W image.
      • High-speed cameras. I study insect flight in the lab, and part of my research involves collecting high-speed footage to analyse. Although colour high-speed cameras exist (used in commercials, and very expensive to hire!), most of those used in research can only attain higher frame rates by filming in B&W, precisely for this reason – you get a lot more information by picking up brightness alone rather than colour, because filming at high speed requires extremely fast shutter speeds (right now I’m working with 3800fps and 1/60000 shutter speeds), which demand very bright lights to begin with. If one were to record in colour, you’d probably need 2–4x the brightness, which sometimes isn’t feasible. In my latest video on drones you can see a collection of high-speed footage both in colour (stock footage) and in B&W (my own, though I gave the clips coloured backgrounds to make them more interesting).
      • Historically, before cameras had the sensors and technology they have today, they could only photograph in black and white due to light constraints. In fact, one reason people rarely smiled in old photos is that early cameras could require minutes of exposure time to gather enough light to record an image, and people knew that holding a fake smile for that long would probably pull a few facial muscles :p So they kept their normal faces on instead.

      I’m a bit of a camera geek – but if you’re interested in learning more about specialised cameras, there was a fantastic documentary released in 2004 by the BBC called “Animal Camera”, and I recall they covered everything from night vision, to tiny mountable cameras giving a first-person view from birds, to heat-sensitive cameras. Technology has only gotten better since then, but they do a great job of explaining what each of these cameras does – and they are probably part of the reason I’m so interested in this subject today!

      Thank you for leaving me such a lovely comment!

      1. Thanks for the BBC link – I have watched a few clips now. For my high-speed video watching I usually watch Smarter Every Day on YouTube; the latest is combustion at 20,000 fps in colour. His channel is popular enough that the video camera manufacturers give him cameras to play with 🙂


      2. Imaging satellites take their highest-resolution pictures in grayscale. A common technique is to take a photo in each primary colour and then combine them with a higher-resolution grayscale photo, resulting in a high-resolution colour image.
