I’m not going to lie, this is a large document! I wasn’t originally planning on doing such an in-depth write-up for the Raspberry Pi Film Telecine. However, as the project developed, it became clear that there was going to be a lot of technical information!

There are 80 pages of raw documentation for the project, as well as another 60 for appendices, references and technical standards details.

Is this the “best” way to make a Raspberry Pi Film Telecine? Not at all. There are a number of corners cut with a setup like this! Some of them were COVID issues from completing the project during lockdown, while others were self-imposed limitations (namely cost).

What I hope is that this will serve folks out there who want to build their own, or even those who want to try a different setup of their own.

It worked out well enough for me as I ended up receiving a First Class with Honours, plus I recently received The Institution of Engineering and Technology (IET) Education Award for this project!

So I hope you enjoy this document on my Raspberry Pi Film Telecine project. Just keep in mind that this is a scientific write-up, so it might be a bit dry for casual readers.

NOTE: If you wish, you can also download the PDF of the dissertation here.

Also, as folks have been asking, here’s a summary video of the different steps in the stabilisation and colour correction:


Department of Electronics & Electrical Engineering

Telecine Prototype

Dissertation

Christopher Cunningham

Supervisor: Ellis, D

2nd Marker: McMahon, R

 

Abstract

Due to the enormous scale of research and development that goes into modern video technology (and the costs that ensue), the general rule of thumb is that the more expensive the video product, the more advanced it is and the better it is at accomplishing the function it was designed for.

The aim of this project is to flip that concept around and to research, design and demonstrate a cost-effective telecine scanner prototype.

What is a cost-effective prototype? One that provides a standard definition output at a cost cheaper than the currently available commercial products. This was accomplished by following the key aspects of the project aim.

Research: film as a technology has been around for a (relatively) long time and would be considered legacy technology by modern digital standards. As such, there exists a great wealth of professional scientific research and standards on film, film projection and film transfer, available to be used for reference.

Design: modern visual technology has advanced far beyond the capabilities of common home film formats, and as such the tools for handling the transfer of film to digital can be built utilising inexpensive components.

Demonstrate: through careful consideration of the research and creation of a suitable design, a cost-effective telecine scanner was able to be constructed as a prototype.

The prototype detailed in this project was developed this way using just over £150 worth of components and was able to output at PAL UK resolution, making it a more cost-effective solution than other standard definition commercial solutions currently available on the market.

Table of Contents

Abstract
1. Introduction
2. Project Aim
3. Project Objectives
3.1 Review Technical Requirements of film for scanning
3.2 Identify Commercially available equipment and services
3.3 Design a suitable implementation for a film scanner
3.3.1 Analysing suitability of hardware and software for film scanning
3.4 Construct a prototype scanner
3.5 Obtain Test Results of prototype scanner
3.6 Assess the output quality of the scanner relative to cost versus other products
3.7 Compare and contrast the test results against both predictions and specified requirements
3.8 Evaluate the success of the project
3.9 Propose areas for future work
4. Literature Review
4.1 Scanning quality and video output
4.1.1 Scanning Solutions
4.1.2 Justification for prototype telecine film scanning format
4.1.3 Justification for prototype telecine scanning quality
4.2 Hardware and Software considerations
4.2.1 Single Board Computers
4.2.2 Justification for not considering Arduino
4.2.3 Justification for Raspberry Pi 3 Model B+
4.2.4 Raspberry Pi Operating Systems
4.2.5 Justification for Raspbian as Operating System
4.2.6 Projector Bulb Considerations
4.2.7 Justification for LED Bulb
4.2.8 Justification for a Raspberry Pi Camera
4.2.9 Power Delivery
4.2.10 Sourcing Additional Super 8mm Components
4.2.11 Justification for 3D Printed Parts
4.2.12 Justification for Main Chassis
5. Method
5.1 Raspberry Pi 3 Model B+ Performance
5.1.1 CPU/iGPU Overclocking
5.1.2 Boot Solution – USB Data and SD Card Tests
5.2 Raspberry Pi Camera v2 Initial Hardware and Software
5.2.1 Test Charts
5.2.2 Camera Stability
5.2.3 Camera Software Comparison
5.2.4 Creating Camera Python Code
5.2.5 Google Drive Automatic Synchronisation
5.3 LED Control
5.3.1 LED Control Python Code
5.4 Initial Chassis Design
5.5 Motor Power Delivery
5.6 Motor Control
5.6.1 Motor Control Python Code
5.7 Running Multiple Python Scripts
5.8 Full Chassis Build
6 Results
6.1 Stability
6.2 Colour
6.2.1 Removing ‘Red Fade’ Effect from Blue/Cyan Film Emulsion Dye
6.3 Project Cost Effectiveness
6.3.1 Component Cost
7 Discussion
7.1 Power Delivery
7.2 3D Printed Components
7.3 Camera
7.3.1 Camera Lens
7.4 Python Coding
7.5 Overall Raspberry Pi Functionality
8 Conclusion
8.1 Reflection
8.1.1 Meeting the Aim and Objectives
8.1.2 Evaluation of Outcome
8.1.3 Improvements
8.2 Recommendations
8.2.1 Accuracy of Film Delivery to Gate
8.2.2 Film Delivery Components
8.2.3 Camera
8.2.5 Power Delivery
8.2.6 Further Study
Appendices
1. Supporting Images from Swinson, P.R. (1995)
2. ASDA Photo Cine Film to DVD and USB Pricing Chart
3. Jessops Photo Cine Film Conversion product page
4. Kodak Express London 8mm Cine Film Cost Chart
5. C2DT Cost Chart
6. Alive Studios Quote Form
7. United Nations Security Council Resolutions
8. Additional Film Scanning Quality Issues
9. Table 1 – Single board computer specification comparison data sheets
10. Microsoft Insider Preview Builds
11. MASTER LED ExpertColor 5.5-50W GU10 930 36D Data
12. NEMA 17 Stepper Motor Specifications
13. L298N Dual H Bridge Stepper Motor Driver Board Specification Sheet
14. Sprocket and Rollers Design Images
15. Super 8mm Take Up Reel Design
16. Super 8mm Film Table and Adapter
17. Stepper Motor Mount Design
18. Script for Raspberry Pi CPU Cooling Tests
19. Script for SD Card Benchmark
20. Camera Mount and Film Gate/Plate
21. Raspberry Pi Camera GUI Software
22. Google Drive Authentication Information
23. Telecine Prototype Main Chassis Images
24. Installation of the DRV8825 Stepper Motor Drivers
25. Additional Motor Control Code
26. Contents of MakerBeam Regular Black Starter Kit
27. Original Project Timetable
References
Standards and Recommended Practices

1. Introduction

In modern film and TV production there is a constant back and forth between what is efficient from an engineering standpoint and what the consumer of the content now demands. With the growing popularity of streaming, content is consumed through increasingly advanced technology across an ever-growing variety of hardware.

Digital media is (generally) scalable, adaptable to different delivery platforms, and easier to produce thanks to non-linear editing combined with high-performance digital camera solutions. The technical hurdles come in the form of legacy film material, not just from the studios but from home users, who are now seeing thousands of hours being lost due to failing hardware, deterioration of film quality and overall incompatibility with digital media. For photography this has not been a drastic shift: even with a 36-exposure film, home scanners and commercial solutions are generally fast enough, and of high enough quality, to ensure digital transfer of frames. For videography, however, standard home-use Super-8 film runs through that many frames every 2 seconds, or, to visualise it another way, 1,080 frames a minute. As such, more elaborate continuous scanning solutions are required.

 

Although large production studios can justify the costs of high-quality scanning solutions, this is not the case for home use. This is a two-fold issue. Productions have long favoured the larger 35mm film format, going all the way up to 70mm IMAX, and with studios driving demand for professional scanning of these formats, companies have a cost incentive to provide scanning solutions, as there is still a desire to shoot large-scale productions on film rather than straight to digital. The same cannot be said for home users, however.

Although the 8mm and Super-8mm formats were the popular home solutions thanks to their low cost relative to output quality, home users have now almost completely switched over to digital filming solutions for that same reason. As such, there is significantly less demand for 8mm and Super-8mm scanning solutions, which pushes up the costs of both home devices and commercial services which handle those formats. However, as legacy material ages, and with new Super 8 products coming to market (KODAK.COM, 2017), there is potential for a surge in home-use demand.

 

The focus of this project will be understanding the viability of low-cost components, with the deliverable being a prototype system which is cheaper than currently available solutions while still maintaining an acceptably comparable result, thanks to careful consideration of scanning performance combined with precise post-production techniques. Although the main deliverable will be the prototype, there is potential to provide sample footage from the prototype to give context for the performance of the system against the Society of Motion Picture and Television Engineers (SMPTE) standards for home use.

2. Project Aim

The main aim of this project is to research, design and demonstrate a cost-effective telecine scanner prototype.

3. Project Objectives

3.1 Review Technical Requirements of film for scanning

With film having been around significantly longer than digital mediums, the initial objective of the project will be to review the technical requirements for scanning film and formulate suitable specifications for a prototype product. This will initially be on a purely theoretical basis, using any available SMPTE standards, or those which were under consideration, since home consumer/commercial use is not as tightly controlled as professional use.

Due to film products being cut from the same stock material, standards documentation is available for the raw stock cores (SMPTE ST 37 1994), which allows for baseline quality calculations based on Swinson (1995) [1], who stipulates:

  • Maximum pixel density “sweet spot” of 70 lines per mm of film
  • Maximum colour depth of 12 to 14 bits per colour channel

This results in an ideal resolution before noise from the film grain itself starts to be introduced into the scan. The advantage for the project here is that film is a medium that has been around far longer than digital, meaning a lot of technical limitation analysis has already taken place; this in turn allows the project to utilise the data efficiently and to build out the required specifications without guesswork.

3.2 Identify Commercially available equipment and services

In order to show the suitability of the project, several existing commercial services and products need to be investigated and compared to the prototype proposal. As the prototype is being designed as a home-use product, the main comparisons will be the quality of output and the cost of the product or service.

Although 8mm and Super 8mm film products were abundant and cheap for the consumer back when they were the standard home medium, this is certainly not the case today. The products and services that still exist for the medium have become more niche and premium in price. However, as stated above, the overall scanning quality for 8mm and Super 8 is closer to that of analogue colour video (as per ITU-R BT.601 and BT.1700), and as such consumers could be paying for a service or quality that is simply not obtainable with consumer products.

The main objective here will be to have the prototype cost less than a currently available product, while being viable as a standard definition alternative service.

3.3 Design a suitable implementation for a film scanner

As this is not an electronics engineering or product design project, the main approach to designing a film scanner is to take advantage of available commercial components and larger devices, as well as the simplicity of the “plug-and-play” nature of computer components. As the result is at best a working prototype, no consideration is needed for the aesthetics of the prototype, only its functionality.

The main focus here, through testing of individual components, is to create what is in effect a design like that of an 8mm/Super 8 home projector, only with the prototype instead being used to capture the individual frames of footage off the film reel.

3.3.1 Analysing suitability of hardware and software for film scanning

As the above design process goes hand in hand with the analysis of both suitable hardware and software, the objectives here become very cyclical in nature: deciding upon one component or a larger piece of hardware, implementing it, testing it with software for suitability, then moving on to the next component.

As stated above, cost to performance is key here. Although there are many expensive components and products available on the market, the objective is finding components that are fit for purpose (namely, having the prototype output standard definition video), rather than spending more on hardware or software that might not be needed, and thus over-specifying.

For computer components, the main focus will be on CPU stress testing to verify general performance stability and heat dissipation (overheating results in a drop in performance, as most modern CPUs throttle their clocks once the manufacturer's TMAX is registered on the core, to ensure the survivability of the processor), as well as on the storage medium's read/write performance, since this is where the scanning data will initially be stored before any post-production work is undertaken. Additionally, a main board that makes it easy to attach, power and monitor additional components for the prototype is a must.
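The dissertation's full cooling-test script appears in the appendices; as a minimal sketch of the idea (the function names and the sample string here are illustrative assumptions), the output of the Raspberry Pi firmware tool `vcgencmd measure_temp` can be parsed and compared against the firmware's thermal limit:

```python
# Minimal sketch of a CPU temperature check for stress testing.
# Assumes the Raspberry Pi firmware tool `vcgencmd measure_temp`,
# whose output looks like "temp=48.3'C". The 85 degree figure is the
# default firmware thermal limit, used here purely as an example.

def parse_temp(vcgencmd_output: str) -> float:
    """Extract the temperature in Celsius from vcgencmd output."""
    return float(vcgencmd_output.strip().split("=")[1].rstrip("'C"))

def is_throttling_likely(temp_c: float, tmax_c: float = 85.0) -> bool:
    """True if the core is at or above the firmware's thermal limit."""
    return temp_c >= tmax_c

if __name__ == "__main__":
    sample = "temp=48.3'C"  # example output, not a live reading
    t = parse_temp(sample)
    print(t, is_throttling_likely(t))
```

On a real Pi the sample string would come from polling `vcgencmd measure_temp` in a loop while a stress workload runs, logging the values over time.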

Once the initial hardware is decided upon, a suitable operating system will be chosen by analysing the commonly available operating systems. As these operating systems are almost universally open source, analysis will instead be based around suitability for the project (for example, providing maximum flexibility in both available software and easy coding of custom software for purpose).

For the camera, consideration will be given to hardware compatible with the main computer board and operating system, to allow for easy setup and further testing. The testing setup will take its influence from the analogue standard test cards, using the above technical standards and ITU recommendations for analogue colour video. Additionally, software will need to be used with the camera (either sourced or designed) for the hardware to produce footage to this standard.

3.4 Construct a prototype scanner

Assuming the above objectives are undertaken successfully, construction of the prototype scanner will need to be undertaken in order to complete the remainder of the project. This will also involve additional research and analysis of smaller components needed to create the prototype, including film spooling solutions, sprocket design for rollers, film gates, light sources and camera mounts.

3.5 Obtain Test Results of prototype scanner

Once the prototype is operational, careful scanning of sample film (film whose destruction through unforeseen errors in the design is acceptable) will be undertaken to allow for calibration and analysis of the prototype. Once the output is stable and the prototype can handle a larger film reel, additional scanning will take place to provide a wider variety of footage to analyse.

3.6 Assess the output quality of the scanner relative to cost versus other products

Although this could include randomly selected 8mm/Super 8 film footage, marketing sample reels from popular films will be used for direct comparison. This gives the benefit of having professional-standard digital copies of these films to compare the prototype's output against, in what should be a worst-case scenario for the output quality comparison, as it will be compared against the best available standard definition footage.

Additionally, a review of the final cost of the prototype versus other commercial products can then be undertaken, with an aim to prove or dispel whether the prototype has matched, improved upon or fallen short of existing product outputs.

3.7 Compare and contrast the test results against both predictions and specified requirements

With the comparison data available from the previous objective, a review of the output based on estimated performance and the chosen requirements can be undertaken. This is critical in understanding whether the prototype has been a viable project from a technical standpoint.

3.8 Evaluate the success of the project

Rather than simply a technical review, this will also include analysis of the production process of the prototype, as well as a review of the volume of post-production work that may or may not be required to achieve the result of a successful telecine prototype. There will be additional discussion over the ease of use for an unskilled home user to replicate the work versus them purchasing a pre-made product or paying for a commercial service.

3.9 Propose areas for future work

A key priority for a prototype project is improvement. Due to the unforeseen nature of issues that come from designing a product from scratch, this review will be critical in understanding how to improve on the existing design, be that the technical output and overall performance of the prototype, or simpler quality-of-life improvements.

Main areas to cover here will be scanning resolution and output, matching of scanning hardware to original filming equipment, as well as issues like speed of scanning versus professional solutions. Additional points will be added to this as the project progresses.

 

4. Literature Review

4.1 Scanning quality and video output

There is a large perception gap between consumer knowledge and professional standards. Although official bodies exist to produce professional standards in production, these are often lost on, or not understood by, the majority of consumers. This has often been the case when talking about technology and technical terminology; a simple current example is 4K Blu-ray film.

Consumers are generally aware that 4K is a higher resolution than normal Blu-ray High Definition; however, there is less awareness that the majority of films are produced at 2K and upscaled to 4K. So, although the consumer is paying a premium for higher resolution footage, they are not actually seeing the full benefits of 4K, as the content was never shot at that resolution. This has resulted in online resources which attempt to inform consumers when they are paying for upscaled 4K footage versus the real thing (4KMEDIA.ORG, 2019).

This is of crucial importance to this project as the output quality is directly tied to the film format chosen for the prototype. Although a 4K scan of the film could technically be produced, as stated by Swinson (1995) [1]:

  • Maximum pixel density “sweet spot” of 70 lines per mm of film
  • Maximum colour depth of 12 to 14 bits per colour channel

Although Swinson only calculates the resolutions for professional film standards (16-, 35- and 70mm), as consumer film comes from the same base stock, the data for the consumer film sizes can be calculated. This results in an ideal resolution (ignoring pixel binning and RGB stacking/median blending) of 1075 x 546 for Super 8mm film scanning and 1075 x 449 for standard 8mm, before noise from the film grain itself starts to be introduced into the scan. As such, these will be used as the theoretical technical standards for the project, and as a baseline to guide development of the prototype.
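As a rough sanity check on those figures (a sketch only: the two-samples-per-line assumption is the usual Nyquist reading of "lines per mm", and the film dimensions are back-derived from the quoted resolutions rather than taken from a standards document), resolving 70 lines per mm implies 140 samples per mm, so the quoted pixel counts correspond to a sampled film area of roughly 7.7 mm across and 3.9 mm (Super 8) or 3.2 mm (standard 8mm) high:

```python
# Back-of-envelope check of the ideal scan resolutions quoted above.
# Assumption: resolving 70 lines/mm needs 2 samples per line (Nyquist),
# i.e. 140 samples per mm of film.
SAMPLES_PER_MM = 2 * 70

def sampled_size_mm(pixels: int) -> float:
    """Physical film dimension (mm) implied by a pixel count."""
    return round(pixels / SAMPLES_PER_MM, 2)

# Super 8mm: 1075 x 546 pixels
print(sampled_size_mm(1075), sampled_size_mm(546))  # ~7.68 mm x ~3.9 mm
# Standard 8mm frame height: 449 pixels
print(sampled_size_mm(449))                         # ~3.21 mm
```

These back-derived dimensions are consistent with both formats sharing the same 8mm-wide film stock (hence the identical 1075-pixel width) while having different frame heights.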

4.1.1 Scanning Solutions

The project aim is to design a cost-effective telecine prototype. In order to understand what would make this project cost effective, research into the various scanning solutions to understand pricing and output needs to be undertaken.

a. All-in-One Convert to Video (single device)

The All-in-One video scanner is a commonly available commercial device and the closest to what this project's prototype is based upon, being similar in design to the more professionally designed 16mm and 35mm telecine systems. Although Winait (2019) lists the system at £299, there are multiple variations of this exact system under different manufacturer names at higher costs, like Reflecta (n.d.) at £463 and Wolverine (n.d.) at £473, suggesting that these models are made by a single third-party parts manufacturer before a generic branded shell body is applied.

Figure 1 – A Winait 5″&3″ Reel 8mm Roll Film & Super8 Roll Film Digital Film Video Scanner

With these systems, the consumer receives a high definition output at 1080x1044p, with the footage saved to SD card. However, there are no other listed specifications, nor a breakdown of hardware performance, suggesting the manufacturers are selling the product in a similar way to televisions: consumers will focus on the resolution first and foremost, rather than other important specifications such as colour reproduction, frame rate or signal-to-noise ratio.

One additional performance indicator is the stated 2 frames per second processing speed. However, as reviews for the device dispute this (and the cost of the device rules out testing it manually), this cannot be taken as a reliable specification.

Due to the lack of additional available information, the 1080x1044p output and £299 cost will be the base comparison for the project prototype. Although, as stated above, the overall speed is disputed, the 2 frames per second processing speed will also be used for the project.

b. Convert to JPEG (convert to video via PC)

Although traditionally used to convert 35mm photography film to digital, this particular variation of film scanner most closely represents the workflow of the project prototype.

Figure 2 – A Kodak SCANZA Digital Film Scanner

Kodak (n.d.) lists the specification as a 14MP JPG image or 22MP interpolated image, and although the system can scan 8mm and Super 8 film, the product is completely manually operated and has no options for mounting film reels.

Once the individual frames have been converted to JPG files, they can be compiled as a sequence within Adobe Premiere Pro. As this initial scanning process and compilation is all handled by the end user, the overall quality of the output will be extremely dependent on the skill of the user in accurately lining up the film within the scanner, as well as their knowledge and skill in programmes like Adobe Premiere Pro.

One benefit here is cost, as the device typically sells for £149.99. Although there is no mention of speed, the comparison to use for a device like this would be cost to performance output over the time taken to carefully feed a film reel through the device manually.

c. Commercial scanning services (High Street)

One main alternative to the “do it yourself” approach of the previously detailed products would be to take the film reels along to your local High Street store and use their digital conversion services.

This is a surprisingly common option that is available in both supermarkets and specialist photography stores.

ASDA (n.d.) simply charge by the foot of film length (see appendices for the full chart [2]), with the deliverable provided on DVD or USB. Their maximum film length for processing is 1600ft; it is worth noting that the per-foot price does not change with length, resulting in a cost of £0.24 per foot of film (if transferring to DVD).

One key point to note is that although no detail is provided by ASDA on the product page, the highlighted end product being on DVD potentially implies a standard definition output. This is further reinforced by the flat £13 fee for transfer to USB: the fact that a larger storage device is not required as the volume of film increases indicates that a smaller-file-size, standard definition output is being used.
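A rough sanity check supports that inference (the bitrate and disc capacity figures here are typical illustrative values, not figures from ASDA): a single-layer DVD comfortably holds a couple of hours of standard definition MPEG-2 video, whereas HD footage at a typical bitrate would overrun the disc quickly:

```python
# Rough capacity check: hours of video that fit on a single-layer DVD.
# Assumptions: 4.7 GB disc, ~5 Mbit/s for SD MPEG-2 and ~15 Mbit/s for
# HD; both are illustrative typical figures, not quoted specifications.
DVD_CAPACITY_BITS = 4.7e9 * 8

def hours_on_dvd(bitrate_mbps: float) -> float:
    """Approximate hours of footage at a given video bitrate."""
    return round(DVD_CAPACITY_BITS / (bitrate_mbps * 1e6) / 3600, 1)

print(hours_on_dvd(5))   # SD: ~2.1 hours
print(hours_on_dvd(15))  # HD: ~0.7 hours
```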

Jessops (n.d.), being a photo specialist store, is by comparison a simpler proposition for the consumer: they charge a flat fee of £13 per 50ft of film [3], resulting in a cost of £0.26 per foot of film (if transferring to DVD). As with the previous service, although Jessops do not specify, the assumption is that their service provides a standard definition output, as it is primarily advertised as a convert-to-DVD service.

Having the most established specialist brand name, Kodak Express London (n.d.) provides a more customisable purchase system for the consumer. Based on their more aggressive pricing [4], it is clear that they have a much larger (and faster) system available, which allows a much cheaper cost to the consumer. As a result, although this is still a transfer-to-DVD service, they charge £0.15 per foot of film.

d. Commercial scanning services (specialist online)

As with the High Street services, there are a number of specialist online order services available, some of which provide significantly higher resolution outputs.

To provide an initial cost comparison, C2DT (n.d.) provides a standard definition DVD outputting at 480i. Because of this, the service can be compared against the High Street solutions for the base service they provide. The C2DT basic service comes at a cost of £0.20 per foot of film [5].

Of the other commercial services, that provided by Alive Studio (n.d.) is the only quotable service above standard definition, providing High Definition (1920×1080) and Ultra HD 4K (3840×2160) scanning options [6]. These services cost either £95 for 200ft at HD or £142.50 for 200ft at 4K, resulting in a comparison price of £0.48 per foot for HD and £0.71 per foot for 4K.
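The per-foot comparison figures above follow directly from the quoted prices; a quick calculation reproduces the derived ones (ASDA, Kodak Express London and C2DT quote per-foot rates directly, so only the flat-fee services need converting):

```python
# Reproducing the per-foot costs derived above from the quoted prices.
# Decimal with half-up rounding matches conventional price rounding.
from decimal import Decimal, ROUND_HALF_UP

def cost_per_foot(price_gbp: str, feet: int) -> Decimal:
    """Price per foot of film, rounded to the nearest penny."""
    return (Decimal(price_gbp) / Decimal(feet)).quantize(
        Decimal("0.01"), rounding=ROUND_HALF_UP)

print(cost_per_foot("13.00", 50))    # Jessops: 0.26
print(cost_per_foot("95.00", 200))   # Alive Studio HD: 0.48
print(cost_per_foot("142.50", 200))  # Alive Studio 4K: 0.71
```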

4.1.2 Justification for prototype telecine film scanning format

With the 8mm and Super 8mm formats being the home user standards, a decision was made to focus on the prototype being capable of Super 8mm scanning.

This is firstly down to the visible frame being larger, allowing more of the scan to be analysed, while also holding more visual detail than the smaller 8mm frame.

There is also far more data available for Super 8mm, it having been the more popular of the two similar film formats since its introduction. Although White (1962) discusses how the 8mm format became the home movie standard thanks to its balance of quality and cost, it is equally crucial that there was parity between home and educational uses thanks to the low cost of reels, cameras and projectors. This is something which Cappel (1969) summarises exceptionally well for educational use, and it is generally the case that home users with families are far more willing to use the same equipment that is being used to educate their children.

4.1.3 Justification for prototype telecine scanning quality

Generally, when it comes to developing technology products, the rule is that the more cutting-edge the product, the higher the cost and the greater the potential for larger financial gains.

“The industrial revolution in the new century is, in essence, a scientific and technological revolution, and breaking through the cutting edge is a shortcut to the building of an economic giant.” Marshal Kim Jong Un (2013) succinctly illustrates this point as the leader of a nation which is under a technology embargo from the UN (Security Council Resolutions 1695 (2006), 1718 (2006) and 1874 (2009) [7]).

Although cine film is (relatively) old technology in 2020 [8], the mediums used for viewing modern media (televisions, PC monitors and mobile devices) are still progressing towards ever higher resolutions as technology advances. As such, conversion technology, although a more niche service, is in higher demand as users want to watch legacy content on modern digital systems. Although this was a major consideration when deciding upon the end output, there is also the aspect of comparison and cost of production.

The aim of the project is to create a cost-effective telecine prototype. A decision was made to provide a standard definition output. This is similar to the commercial products listed in section 4.1.1 and allows a discussion of whether the project prototype is more cost-effective (cheaper) than some of these examples.

Based on the calculations undertaken in 4.1, the resolution of the entire frame will ideally be 1075 x 546, to reduce the potential signal interference from the film itself during the scanning process. As a standard definition output is desired, consideration needs to be given to the ITU-R BT.601 standard in regard to the resolution of the outcome. A decision was made to provide a final digital output frame of 720w by 576h, in compliance with this standard (assuming 625-line, 50 field/s PAL UK encoding parameters).

4.2 Hardware and Software considerations

It is because of the above requirement for standard definition output that cutting-edge hardware and software are not necessary. As digital technology and computing hardware have developed, the requirements for converting analogue colour video to standard definition digital video have dropped in recent years.

4.2.1 Single Board Computers

Single board computers, although not new in concept, are now viable as a base hardware choice for the project thanks to mobile phones pushing demand for high-performing small-board processors. This has resulted in a significant number of options being available on the market at low price points.

TABLE 1 – SINGLE BOARD COMPUTER SPECIFICATION COMPARISON [9]

The common negative feedback with single board computers is that, with size being their key selling factor, any slight increase in performance significantly increases the cost. For example, using the above table to compare the Raspberry Pi 3 Model B+ to the Odroid C2, there is a cost increase of 87.4% for a 0.1GHz CPU clock increase and 1GB of additional RAM (albeit faster DDR3).

Considering some of the more expensive boards in Table 1, for example the Pine H64 or Odroid C2, both use the same CPU core as the Raspberry Pi 3 Model B+, yet have more RAM installed, resulting in better multi-tasking performance.

A 1.4GHz vs 1.5GHz clock is not a noticeable difference out of the box, especially as the CPU can be overclocked, so the Raspberry Pi's CPU can perform identically if provided with enough cooling to compensate for the higher clock.

The Pine H64 is primarily more expensive because of its 3GB of RAM, which again makes it superior for multi-tasking, and as mentioned above, its lower CPU clock speed is not considered a negative due to the manual overclocking capability.

 

The Odroid C2 is the only board that improves upon the Raspberry Pi 3 Model B+ specification out of the box, with a 0.1GHz higher clock and an additional 1GB of RAM. However, paying 87.4% more is not cost efficient, especially as that £33.21 difference can comfortably be put towards other components.

A true alternative, given a larger budget, would be to overclock the A53 CPU on the Pine H64 closer to that of the Pi 3. The 58% additional cost of the Pine H64 is more justifiable given its larger 3GB of LPDDR3. However, the main issue for a prototype build is that, as the Pine H64 is less popular in the consumer market, there is less additional hardware and software support available. The end result is that this board would probably be worth the cost increase to a product designer/engineer building every aspect to custom specifications, rather than the plug-and-play style setup wanted for this project prototype.

4.2.2 Justification for not considering Arduino

Although it would be possible to run a version of the telecine with an Arduino, for ease of testing and repeated test operations the decision was made to remove the Arduino from initial contention, due to its need to be attached to additional computer hardware in order to load code onto the microprocessor. It is for this lack of standalone operating system options that the device was not considered for this initial prototype. More information can be found in the discussion of potential future works.

4.2.3 Justification for Raspberry Pi 3 Model B+

A decision was made to use the Raspberry Pi 3 Model B+ for the prototype telecine. It was chosen as, although it has less RAM than the more expensive models (making it less capable at multi-tasking), it crucially has the same CPU as those more expensive models.

32-bit single board computers are often used to cut costs further; however, most board software requires 64-bit for improved bandwidth and compatibility with other hardware. For this reason, 32-bit boards were removed from eligibility for the final board choice.

The Raspberry Pi 3 B+ has a 64-bit, quad-core CPU, which allows far more compute power for single tasks. The main negative of the board is its 1GB of LPDDR2 RAM; because of this it is not designed for multi-tasking with large pieces of software and files, something which will be elaborated upon in the method section.

There is also a level of brand recognition with Raspberry Pi, which results in an enthusiastic consumer market for custom parts, design guides and standard add-on hardware.

Additionally, as the ARM Cortex-A53 is an unlocked CPU (i.e. capable of being both under- and overclocked from the manufacturer's standard clock speed), the main consideration will be ensuring adequate cooling to prevent automatic underclocking due to overheating.

Finally, the Raspberry Pi 4 was not considered due to its June 2019 release date, which meant limited software compatibility and development time compared with the Pi 3 B+, released in March 2018 and therefore a more mature platform for the prototype project.

4.2.4 Raspberry Pi Operating Systems

For software, the main initial consideration is the Operating System (OS). As the lack of a standalone operating system was the main reason to discount the Arduino, it is important to analyse the available operating system builds, as well as any unique features each offers over the others.

As cost is a critical factor, one further advantage of the Raspberry Pi is that the majority of its operating systems are Linux-based, making them free to download and use.

There are several open source solutions to choose from, which are summarised below.

a. Raspbian

The Raspberry Pi Foundation (n.d.) defines Raspbian as, “the Foundation’s official supported operating system.” With Raspbian being an open source OS maintained by the team responsible for the main board itself, it offers a number of advantages over traditional open-source software.

Firstly, although the OS was based on Debian (a Linux-based OS), it has evolved over time since the first Raspberry Pi board to now be a standalone, stable, and optimised operating system for the Raspberry Pi board.

In regard to visualisation, Raspbian uses PIXEL desktop, which is explained by its developer as meaning, “Pi Improved Xwindows Environment, Lightweight” (Long 2016). This adds additional functionality to the Raspberry Pi, like being able to boot the device over a network via the ethernet port as well as full remote access.

Due to there being multiple Raspberry Pi models, a board without an operating system could suffer from compatibility issues with software. However, so long as the software has been designed to run within the main Raspbian environment, it will still function (although performance may still vary from model to model).

The main negative with Raspbian is that building such compatibility into the operating system is often slow. This slowness, common across many operating systems, contributed to the decision not to use the Raspberry Pi 4, as stated in the previous section.

 

b. DietPi

The next operating system to consider is DietPi, another OS based on Debian, yet specifically stripped of features with the aim of a minimal footprint and fast loading (DietPi, n.d.). What is unique about DietPi is that, despite its smaller install footprint, it still comes with a functioning graphical user interface (GUI).

The main negative of DietPi is also its greatest strength. Due to its stripped-back nature, a lot of normally available Linux-based software simply does not work with it. Equally, the user has to be more knowledgeable about the Raspberry Pi's hardware capabilities, as to cut down the install footprint the OS must be configured manually via a text configuration file.

c. Minibian

Unlike the above OS, Minibian is a branch taken specifically off Raspbian OS. Toggio (2016) states, “The main focus is to have a small, updated and stable distribution that is fully compatible with official Raspbian image, without GUI and unneeded tools.”

The result is a functioning Linux distribution running purely from the command line. This makes the OS extremely popular with experienced coders who do not need a GUI to run their code and programs on the Raspberry Pi. Due to its stripped-out nature, its biggest strength is its notably fast boot time, needing only to load into the command line while providing a level of compatibility with the main Raspbian branch; in theory this opens up the same software (although software that requires a GUI will not function).

This then becomes its main disadvantage, as it is not beginner friendly, requiring an advanced user to code and troubleshoot any issues.

d. Ubuntu Mate

Perhaps the best alternative to Raspbian as a fully-fledged OS, Ubuntu Mate takes advantage of being the most popular mainstream Linux distribution (Linux Org, 2017). Although never originally designed for the Raspberry Pi, the combined popularity of the OS and the Pi resulted in a branch build being created specifically for it. As such, although most packages are not designed for the Raspberry Pi, Ubuntu being the most downloaded distribution of Linux makes it well supported as a whole.

Despite its overall popularity, the lack of a specifically designed distribution has its negatives. Firstly, there is generally less system stability compared with other operating systems, largely a result of main Ubuntu updates being pushed to all branches without hardware compatibility being fully tested on the Raspberry Pi. This is compounded by “full” support only covering a 32-bit branch, with the 64-bit version being experimental, thus limiting software compatibility and capability on the 64-bit capable Raspberry Pi.

Finally, as this is a small branch of a larger OS, the community support is smaller for this than more popular builds (like Raspbian), so any fixes and updates are generally slower to be pushed out for the two Raspberry Pi branch builds.

e. Windows IoT Core

Unlike the previous builds listed, Windows IoT (Internet of Things) Core is a free distribution branch of Windows 10, designed for Internet of Things style projects. What is crucial to this build is that its design, backend systems and software are all the same as the main branch of Windows 10.

Microsoft (2018) states, “Windows 10 IoT Core is a version of Windows 10 that is optimized for smaller devices with or without a display that run on both ARM and x86/x64 devices.”

This includes access to unique services like Microsoft Azure, allowing natively running cloud computing performance to be accessed.

However, as this is a version of Windows 10 similar to that provided on entry-level tablets, there are limitations on what can be run. Namely, any software a user might want to install has to be available (or built) under the Universal Windows Platform (UWP) app system, which means even advanced users are limited to coding within Visual Studio.

Equally, the version of Windows 10 IoT Core available for the Raspberry Pi 3 Model B+ is only part of Technical Preview Build 17661.

Microsoft (n.d.) states:

This release for the Raspberry Pi 3B+ is an unsupported technical preview. Limited validation and enablement has been completed. For a better evaluation experience and for any commercial products, please use the Raspberry Pi 3B or other devices with supported Intel, Qualcomm, or NXP SoCs. 10

As such, although this is a unique option as an OS, it is not without issues. Finally, although access to cloud computing is useful, the general use case for this technology in small devices is for automation, trackers, and machine learning.

f. Chromium OS

Similar to Microsoft's offering above, Chromium OS is created by Google for devices larger than tablets, such as Chromebooks.

In regard to the build philosophy behind Chromium OS, Google (n.d.) states: “Chromium OS is an open-source project that aims to build an operating system that provides a fast, simple, and more secure computing experience for people who spend most of their time on the web”.

 

Chromium OS is now a mature platform, with Google having developed it for years since its original release. Like Windows IoT Core, it takes advantage of cloud computing, though in a different way: Google uses its own cloud servers to host the main OS software, resulting in a very fast boot, as little data is needed in the base install to boot the system. It is also because of this that the OS has a footprint comparable to small Linux-based distributions.

However, that cloud computing performance comes at a cost to the user. Google runs Chromium OS as a closed ecosystem: only Google's suite of cloud-based software is accessible. Equally, although it is made available to OEM partners to install, it is a complicated process to install the OS individually.

4.2.5 Justification for Raspbian as Operating System

A decision was made to use Raspbian as the base Operating System. This was based upon it being the most supported OS for the Raspberry Pi 3 Model B+; despite its developers being generally slow to provide meaningful updates to the build, it will at least be stable and 100% compatible with the board.

As the prototype will not need the additional feature set of the OS (outside of its functioning GUI and terminal), the slower updates are not seen as a negative for the project prototype, particularly as additional software will need to be either tested or created for the prototype itself to allow it to function to specification, which current versions of Raspbian can support.

4.2.6 Projector Bulb Considerations

When searching for suitable products to use as a projector-style lamp bulb, it is important to note that normal industry standards, such as the more recent SMPTE ST 196:2003, tend to refer to the commercial film formats (16-, 35- and 70mm motion picture prints). This is mainly down to SMPTE's general policy of not “stepping in” with regard to products made by competing companies.

As Zavada, R.J. (1970) explains: “The SMPTE is expected to guide the establishment of standards; however, the Society cannot become involved in the comparative rating of competitive items.” This is why, in the case of Super 8mm film, the vast majority of the available standards focus on the design and manufacture of the film itself, rather than the various industrial commercial projector products of the time.

However, one critical point is that, although there were a number of manufacturers of film stock, this conformity to industry standards allows a level of deductive reasoning for smaller Super 8mm film cuts, as all stock is based upon the same larger raw stock before being cut down to size.

a. Light Intensity

In the case of raw stock, the standards are a valuable tool in correlating the information for Super 8mm film. Manufacturing standards result in easier identification of the stock, as seen in SMPTE ST 184:1998 and also with SMPTE 37:1994 which shows how the different cores are used for the film sizes from stock, with Dimension B in the standard showing how different film sizes can be obtained from the same raw stock cores before cutting. That manufacturing control can further be seen with SMPTE ST 75:1994 which details where perforations are cut to on the main raw stock in regard to single-row and multiple-row rolls and how that impacts winding of the rolls.

It is with this understanding that a hypothesis can be made for Super 8mm film lighting requirements, based on a number of these existing standards. Firstly, it is important to understand that telecine systems function in a similar way to that of projector systems in that film is fed off a film reel through a series of sprockets and runners to a film gate/deck where a suitably bright light is shone onto the film so that the image is cast, in this instance, through a lens onto a camera sensor.

There are recommended practices under SMPTE which exist for projectors in multiple location types and conditions.

SMPTE RP 12:1997 states that for outdoor use (a drive-in theatre) a minimum luminance of 7fL (24 cd/m2) is required, with a peak of 16fL ± 2fL (55 cd/m2 ± 7 cd/m2), while SMPTE RP 98:1995 states that for indoor use the luminance range should be between 12fL and 22fL.

A foot-lambert (fL) is a United States unit of luminance; 1fL is equal to 3.426 candela per square metre (cd/m2). It is the candela per square metre figure that is important when investigating optimal light intensity for bulbs, as manufacturers often cite a peak candela value (cd measured at a distance from the source).

It is this relative intensity at distance which shall be utilised in investigating an optimum bulb for the project prototype. As the higher values under the SMPTE recommended practices are 16fL and 22fL, a bulb of at least 55 – 75 cd/m2 will need to be sourced. Crucially, this range can be used regardless of which unit a manufacturer quotes, as 55 – 75 cd/m2 = 55 – 75 lux = 55 – 75 cd (at 1m). For other measured distances, this can be re-calculated using the following conversion formula (Riemersma, 2019).

Equation 1 – Simple conversion calculation from candela to lux:

Ev = Iv / D²

Candela (Iv) and lux (Ev) can be converted, given a measuring distance (D) in metres.

This relationship is key to understanding the relative strength and intensity of light bulbs over distance, in addition to the surface area being illuminated, which is why full projector bulbs need significantly more power to provide the required light from the projection room to the screen.
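As a worked sketch (illustrative Python, not part of the prototype software), the conversions described above can be expressed as:

```python
# Photometric conversions used in this section. 1 fL = 3.426 cd/m^2, and
# Equation 1 relates candela (Iv) and lux (Ev) at a distance D in metres.
FL_TO_CDM2 = 3.426

def footlamberts_to_cdm2(fl):
    return fl * FL_TO_CDM2

def candela_to_lux(iv_cd, distance_m):
    # Equation 1: Ev = Iv / D^2 (inverse-square law, idealised point source)
    return iv_cd / distance_m ** 2

def lux_to_candela(ev_lux, distance_m):
    return ev_lux * distance_m ** 2

# The SMPTE RP 12:1997 peak of 16 fL is roughly 55 cd/m^2:
print(round(footlamberts_to_cdm2(16)))  # prints 55
# At the 1 m reference distance, candela and lux are numerically equal:
print(candela_to_lux(55, 1.0))          # prints 55.0
```

This confirms the working above: the 55 – 75 cd/m2 target range maps directly onto candela and lux figures quoted at 1m.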

Taken to the modern extreme, an IMAX projector utilises a 15,000-watt liquid-cooled, short-arc xenon lamp with an output of approximately 600,000 lumens (Lamps & Tubes, n.d.) to sufficiently illuminate a screen of that size. Additionally, two of these 2K projectors are used per installation, resulting in a significant maximum output. A standard IMAX screen is 22m x 16.1m, yet individual installations can have larger display surfaces, with IMAX Melbourne currently the largest at 32m x 23m (IMAX, n.d.).

This relationship of light intensity over distance works in favour of telecine design, as the system components typically sit within a few centimetres from light source to film, and Super 8mm film presents a small area to illuminate.

b. Bulb Colour

One of the more interesting aspects of bulbs is the colour of light they produce. This is important when looking at film and cameras, as these would be balanced to a certain colour temperature, expressed as a value in Kelvin.

Knight, Ray E. (1968) states that film cameras in the studio are typically balanced to 2,900 K, with Type B film emulsion balanced for 3,200 K (±400 K) and Type A film emulsion balanced for 3,400 K (±400 K). This puts the bulbs in line with what would typically be expected when using tungsten or more powerful incandescent lamp bulbs, many of which are used in modern telecine systems.
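A small illustrative check (the helper and its names are hypothetical, not from the build) shows how a candidate bulb's colour temperature can be tested against these emulsion balance ranges:

```python
# Film balance ranges quoted above from Knight (1968), in kelvin.
BALANCE_RANGES_K = {
    "type_a": (3400 - 400, 3400 + 400),   # Type A emulsion: 3400 K +/- 400 K
    "type_b": (3200 - 400, 3200 + 400),   # Type B emulsion: 3200 K +/- 400 K
}

def within_balance(cct_k, emulsion):
    """True if a bulb's colour temperature sits inside the emulsion's band."""
    low, high = BALANCE_RANGES_K[emulsion]
    return low <= cct_k <= high

# A 3000 K (warm white) bulb sits inside the Type B band (2800-3600 K):
print(within_balance(3000, "type_b"))  # prints True
```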

Color Rendering Index (CRI), Correlated Color Temperature (CCT) and Television Lighting Consistency Index (TLCI) are all measures for the colour produced by a bulb. However, how they determine that bulb performance differs from test to test.

CRI measures how faithfully a bulb renders colour to the human eye from a reference chart, in comparison with an ideal reference light source. A CRI of 100 is a perfect score and is usually only seen when the sun is used as the reference source to calibrate a system. CCT then provides a value for the apparent colour temperature of the bulb. As such, the two values need to be used together to accurately understand the performance of a bulb.

High end bulbs for art restoration and installations, as well as film and television tend to have a CRI of 95 and above. Good quality home consumer bulbs typically have a CRI of 90.

4.2.7 Justification for LED Bulb

To provide the best possible ‘middle ground’ for colour temperature (based on the available data above), a decision was made to purchase a bulb of 3000K (warm white). Combined with a high CRI, this corresponds to the ‘930’ lamp colour code (CRI 90+, 3000K).

This was in preference to a 3400K bulb, which, although still classed by many manufacturers as warm white, would result in a temperature similar to that of a tungsten bulb. Nor does the choice go too warm, as would be the case if a 2680K 40W incandescent light bulb were used.

To allow for high enough colour reproduction accuracy, a decision was made to purchase a bulb of at least CRI 95, to allow for very accurate colour reproduction and correction in post when performing a white balance.

Based on the main aim of creating a cost-effective telecine, a decision was made to purchase a Philips MASTER LED ExpertColor 5.5-50W GU10 930 36D 11. According to the Philips (2020) datasheet, the bulb has a CRI of 97, a colour code of 930 (3000K colour temperature) and a luminous intensity of 800 cd. An inexpensive mainstream bulb with a CRI of 97 is excellent and would only be beaten by significantly more expensive industrial-level bulbs. Furthermore, 800 cd is sufficiently intense as a light source at the chosen colour temperature for use at a distance of 5cm from the front of the LED glass to the Super 8mm film passing through the gate and projecting into the camera.
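Applying Equation 1 to the chosen bulb gives a rough sense of the illuminance at the film gate (an idealised point-source sketch; the 36-degree beam angle and any lens losses are ignored here):

```python
# Illuminance at the film gate from the datasheet's 800 cd figure,
# treated as an ideal point source at the stated 5 cm gate distance.
def candela_to_lux(iv_cd, distance_m):
    # Equation 1: Ev = Iv / D^2
    return iv_cd / distance_m ** 2

gate_lux = candela_to_lux(800, 0.05)
print(f"approx. {gate_lux:,.0f} lux at the film gate")
```

Even allowing for the simplifications, this is orders of magnitude above the 55 – 75 lux floor derived earlier, supporting the choice of bulb at this working distance.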

4.2.8 Justification for a Raspberry Pi Camera

Unlike other sections, where there is more of a decision to be made between criteria, due to the general incompatibility between USB cameras and the Raspberry Pi the only viable choice here is a Raspberry Pi camera.

There are currently two versions of the camera available: a standard camera module and an infrared camera module. As the prototype will not require infrared shooting, a decision was made to purchase the standard camera module (currently v2). The key details of the module are that it uses an 8-megapixel Sony IMX219 image sensor (CPC, n.d.) and, being a native camera, it is fully supported for use with the Raspberry Pi 3 B+.

4.2.9 Power Delivery

Although the Raspberry Pi can run from its own USB power source, the number of additional motors and driver boards required for the telecine prototype to operate resulted in the need for multiple power sources.

In order to decide what was required, an investigation into available motors by price was first undertaken, as well as an investigation into popular motor choices for small-scale projects. Although there are multiple partially public designs published (etiennecollomb, 2018; Kinograph, n.d.; Alexamder, 2015; jphfilm, 2018), the one part which is quite consistent is the motor used.

From this, a decision was made to utilise four NEMA17 motors (model: 17HD34008-22B) for the project 12. Although there are many similar NEMA stepper motors available, this particular model was chosen for its modest power requirements relative to its size and torque output. Being 2-phase stepper motors, these have some unique power requirements, based on the peak power drawn if a sudden change in phase is required.

As each motor requires 1.5A at 4.8V to drive, driver boards with sufficient power output and performance were required to regulate both the power delivery and the phase control to the motors.

An investigation into driver boards was undertaken, focused on popular boards for driving both stepper motors and multiple DC motors. What quickly became clear from sources such as Electronics Hub (2018), Last Minute Engineers (n.d.), Tronixlabs (n.d.) and the Raspberry Pi Community Board (2016) was that there is a commonly available driver which designers typically choose for various builds. Based on the recommended drivers used, a decision was made to use the L298N H-Bridge Driver Board Module 13.

This driver board provides excellent power delivery (DC 5V – 35V) with a peak current of 2A; the boards are small, which allows for more discreet wiring, and come with mounting holes allowing for easy adaptation onto the chassis.

Knowing the capabilities of both the motors and the driver boards (as well as their respective power requirements), the requirements for additional power units became clear. With each driver board needing up to 5V 1A to power its own circuitry, and each motor needing up to 5V 1.5A, the decision was made to purchase four transformers, each capable of providing a 12V DC 5A supply from the mains. Although these can be purchased and built into larger enclosures for safety, the initial purchase was only for open-frame transformers from Alibaba (n.d.).
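The current budget implied by these figures can be sanity-checked with a short sketch (the one-motor-plus-one-driver-board-per-transformer pairing is an assumption made for illustration):

```python
# Per-supply current budget from the figures quoted above.
MOTOR_A  = 1.5   # peak current per NEMA17 motor
DRIVER_A = 1.0   # L298N logic-side current
SUPPLY_A = 5.0   # rating of each 12 V DC transformer

load = MOTOR_A + DRIVER_A          # one motor + one driver per supply
headroom = SUPPLY_A - load
print(f"load {load} A, headroom {headroom} A per supply")
```

Even at peak draw, each 5A transformer has comfortable headroom under this pairing.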

4.2.10 Sourcing Additional Super 8mm Components

One main aspect of creating a working telecine prototype is the general availability of suitable components to create the project.

The general consumer market has moved on through multiple generations of different hardware since Super 8mm was at its most popular, which traditionally would have resulted in a number of practical cost limitations to the build.

However, the decreasing price of 3D printing can be utilised for the hard-to-source or expensive components.

For this build the list of 3D printed parts will include:

1. Film Table – a base plate for a Super 8mm film reel to rest on

2. Super 8mm Table adapter – to allow for future potential adaptability, the inner core of the film table will be small enough for standard 8mm reels, with the adapter allowing Super 8mm compatibility.

3. Super 8mm Uptake Reel – designed in two parts to allow for film to be placed into the reel, then spooled up as part of the telecine process

4. Rollers and Sprockets – designed to move the film smoothly and easily from the reel through the main gate and then onto the uptake reel

5. Film Gate and Table – designed in two parts, this allows for the film to be placed securely and at a fixed distance to the camera, to allow for focussing

6. Motor Mount Adapters – with multiple motors needed in the build, mounts will be required to connect them to the main chassis of the design

4.2.11 Justification for 3D Printed Parts

As mentioned above, the main focus for these six 3D printed component parts is cost. However, 3D printing also allows a level of easy compatibility with other components, especially by being able to print mounting housings to suit the chassis parts. Although many of these could be sourced from eBay and other online stores, a decision was made to have these parts printed by LJMU engineering technicians, ensuring correct production, quality control, and parts in pristine condition.

For all of the following prints, standard PLA (polylactic acid) plastic filament was used by the technicians, and all decisions on the print settings were handled by them. For this project, they were supplied with the 3D models to print.

As per SMPTE RP 55:1997 specifications, a Super 8mm sprocket design was sourced and checked against the standard to be fit for purpose; the design was applied to a standard D-shaped 5mm motor shaft, with a standard roller model adapted for purpose 14. This allows for correct film transport through the system thanks to the standardised sprocket size, as well as the sprocket being mountable to a standard motor shaft, which reduces costs.

As per SMPTE ST 212:1995 and SMPTE 160:1995 specifications, a Super 8mm reel design was sourced and checked against the standards for use as the main take-up spool for the film 15. Although such reels exist for purchase, none are originals in pristine condition, so printing guaranteed the design and dimensions of the product used for the prototype.

As per SMPTE RP 50:1995 specifications, the adapter for the simple film table design was sourced and checked against the standards to ensure it was fit for purpose and would fit both the existing film reel with footage and the 3D printed reel 16. As above, although it is possible to salvage these tables from projectors, due to the unknown condition of such components, having a new component printed to specification was of greater benefit in this instance.

With the motors chosen as NEMA17 model 17HD34008-22B, a mounting design could be easily fabricated. The width and length of the top of the motor are both 42.3mm, with the 4 mounting holes spaced 31mm apart 12. The advantage here is that these are common stepper motors, in one of the most common sizes, so the decision was made to download and utilise one of the many open source motor mounts available 17. This also allows a significant cost reduction compared with accessories sold for the chosen main chassis material; MakerBeam sells what is effectively the same bracket at higher cost (MakerBeam, n.d.).

4.2.12 Justification for Main Chassis

Due to the size and complexity of the build, similar to the smaller components above, the original plan was to create the prototype using either simple 3D printed beams to mount the various components, or a plain flat plastic surface into which parts could be screwed for testing.

Although wood is often an easy option for prototyping designs, due to the use of film and camera equipment this would be an inappropriate material as risk of damage from sawdust was seen to be too high.

As mentioned in the Acknowledgements, a MakerBeam Starter Kit (MakerBeam, n.d.) was made available to use during the prototype build and as such a decision was made to utilise this superior building material when prototyping the main chassis for the telecine.

Unlike the 3D printed beams or flat plastic surface, MakerBeam would allow for a fully built-up prototype design and despite it being made from metal, the easy brackets and hex bolt design allows for quick changes to the shape if required.

5. Method

The general philosophy of the methodology used was a criteria-based approach. This decision was based upon using mostly unused (and untested) hardware and software components; as such, analysis would need to be undertaken to confirm each justification.

5.1 Raspberry Pi 3 Model B+ Performance

With the Raspberry Pi being an open developer platform, there are many open-source software tools which allow for testing of the performance.

For the telecine prototype, the critical performance indicators are the CPU (Central Processing Unit) and iGPU (Integrated Graphics Processing Unit) overclocking stability, the sustained cooling performance, and the read/write performance of the SD card.

These indicators have a testable and quantifiable impact on how the system as a whole performs, and as such needed to be investigated before additional components were attached to the board.

5.1.1 CPU/iGPU Overclocking

What is firstly important to note is that any manual overclocking of the Raspberry Pi is not covered by the manufacturer's warranty and is done at the user's risk. Due to the design of the Raspberry Pi, excessive overclocking can destroy the CPU through either overvoltage (drawing too much power in an attempt to overclock) or overheating (power draw generating heat on the CPU as a by-product faster than it can be dissipated). If either occurs, permanent damage to the Raspberry Pi can result.

The Raspberry Pi 3B+ has a Cortex-A53 quad-core processor, with each core having its own L1 memory system and a single shared L2 cache. As shown in Table 1, the Raspberry Pi team advertise this as a 1.4GHz processor; however, a closer look provides more insight into its functionality. It has a base (idle) speed of 600MHz and under load will boost to the advertised 1.4GHz. Based on the design of the Raspberry Pi 3B+, the CPU will begin to throttle (reduce clock speed) to 1.2GHz upon reaching 70°C; if the CPU continues to heat up, full throttling occurs at 82°C to protect the system.
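The throttling behaviour described above can be sketched as a simple policy function (the fall-back to the 600MHz idle clock at the hard limit is an assumption; the other figures are those stated in the text):

```python
# Sketch of the Pi 3B+ thermal-throttling policy described above.
SOFT_LIMIT_C = 70   # soft throttle threshold
HARD_LIMIT_C = 82   # hard (protective) throttle threshold

def effective_clock_mhz(temp_c, configured_mhz=1400):
    """Approximate effective ARM clock for a given die temperature."""
    if temp_c >= HARD_LIMIT_C:
        return 600          # assumption: falls back to the idle clock
    if temp_c >= SOFT_LIMIT_C:
        return 1200         # soft throttle stated in the text
    return configured_mhz

print(effective_clock_mhz(55))   # prints 1400
print(effective_clock_mhz(75))   # prints 1200
print(effective_clock_mhz(85))   # prints 600
```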

As the desired outcome is no throttling of the CPU at all (so that the CPU runs at its full clock speed at all times), a number of tests with different cooling solutions (as well as no cooling) were undertaken.

To overclock the Raspberry Pi, adjustment to the main boot configuration file is required, which can be accessed through the terminal using the command, “sudo nano /boot/config.txt” (‘sudo’ grants the administrator access needed to alter this file). From within that file, the following lines can be altered to provide an overclock.

arm_freq=1500

gpu_freq=500

over_voltage=4

The arm_freq command allows user adjustment of the CPU frequency, while the over_voltage command allows more power to be supplied to the CPU so that it remains stable. over_voltage takes a simple value of 0-6, with each value over 0 adding an additional 0.025V to the CPU power allocation. Finally, the gpu_freq command allows for overclocking of the GPU core.
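As an illustrative helper (not part of the build scripts; the function names are mine), the relationship between these settings can be expressed as:

```python
# Generate the /boot/config.txt overclock lines used above, and show the
# extra CPU voltage implied by over_voltage (0.025 V per step).
def overclock_lines(arm_mhz, gpu_mhz, over_voltage):
    assert 0 <= over_voltage <= 6, "over_voltage takes values 0-6"
    return [f"arm_freq={arm_mhz}",
            f"gpu_freq={gpu_mhz}",
            f"over_voltage={over_voltage}"]

def added_volts(over_voltage):
    return over_voltage * 0.025

print("\n".join(overclock_lines(1500, 500, 4)))
print(f"+{added_volts(4):.3f} V")   # over_voltage=4 adds 0.100 V
```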

The system performance benchmark SysBench (2007) was utilised as a simple method to “stress” the CPU. The test makes the CPU verify prime numbers by trial division up to a set maximum, a check ending whenever the remainder of a division is zero. As SysBench is not installed as standard, it is installed from within the terminal using the command, “sudo apt-get install sysbench”.
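For illustration, the kind of prime-verification workload SysBench's CPU test performs can be sketched as follows (this is not SysBench's actual implementation, only the same style of trial-division arithmetic):

```python
# Trial-division prime verification, the arithmetic at the heart of a
# `sysbench cpu` style stress test.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:   # remainder of zero means n is composite
            return False
        d += 1
    return True

def count_primes(limit):
    """Count primes up to limit, exercising the CPU with pure arithmetic."""
    return sum(1 for n in range(2, limit + 1) if is_prime(n))

print(count_primes(100))   # prints 25
```

Raising the limit (SysBench's tests above use 50,000) makes the loop run long enough for the CPU die to heat towards saturation.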

Temperature results for this test are taken from the internal sensor data and, along with the script to run SysBench, displayed directly within the terminal for review 18.

The first set of test results were taken with no cooling solution to see standard cooling performance of the CPU over time. This is crucial as it allows for the CPU die to have full heat saturation, allowing for an accurate maximum temperature to be measured.

TABLE 2 – TEMPERATURE READINGS (°C) DISPLAYED DURING CPU OVERCLOCK TESTS

Based on the results shown above, it is clear that the Raspberry Pi 3B+ official clock speed of 1.4GHz was chosen to avoid hitting the soft temperature limit of 70°C. Once clocks of 1.5GHz or 1.55GHz were applied, the higher limit of 80°C was reached in the larger 50000-prime-number test. This supports the earlier data: the system can run at these higher clock speeds, yet a cooling solution is required if they are to be maintained during operation.

Additionally, there are no 1.6GHz test results because the Raspberry Pi would not boot at that clock regardless of cooling, making 1.55GHz the effective maximum clock of this CPU.

A decision was made to use a main core clock of 1.5GHz and GPU core clock of 500MHz.

The 1.5GHz will be used as the main ARM clock, as the available thermal overhead can instead be used to provide a 500MHz overclock on the Broadcom VideoCore IV GPU core. Under normal circumstances, the GPU core will float as high as 600MHz based on power availability, yet setting this to a fixed 500MHz is useful, as this memory is shared between the GPU and system RAM, giving a combined improvement to multiple parts of the system.

TABLE 3 – TEMPERATURE READINGS (°C) DISPLAYED DURING COOLING SOLUTION TESTS

The above table shows the temperature results of the same script running the decided 1.5GHz overclock using different CPU cooling solutions.

The “small heatsink” is a generic aluminium or copper block heatsink of the type often supplied free with Raspberry Pi cases and mounting solutions, though it can also be purchased as a small kit for £1.74 (Aokin, n.d. a).

The dual fan and heatsink is a popular “next step” for enthusiasts overclocking the Pi, comprising a combined heatsink and dual fan. These kits can be purchased for £4.74 (Aokin, n.d. b).

Finally, the Noctua NF-A4x20 5V fan is typically used for high air-flow installations within server racks for sub-component cooling and sustained airflow. These can be typically purchased for £14.99 (Noctua, n.d.). Although it is clear that the Noctua fan can provide a higher level of cooling thanks to its increased size and resulting airflow, its higher cost and power draw make this component unsuitable for the final prototype.

A decision was made to use the dual fan and heatsink solution. This is down to its improved thermal results: it dissipates the heat sufficiently, unlike the heatsink by itself, which still saturated completely at around 70°C. Additionally, it is cheaper than the Noctua fan and only slightly more expensive than the standard heatsink, while offering an excellent improvement over the base test, making it the ideal solution.

5.1.2 Boot Solution – USB Data and SD Card Tests

One of the additional board components that can be investigated is the micro SD card slot. As this is handled by an additional onboard controller, the card slot's standard read/write speed can be adjusted. The reader works at a 50MHz base frequency, which can then be increased at integer divisors of the main core clock (the internal system bus clock, not the ARM overclock above), which is 500MHz.

What typically occurs during this overclock is that there is a ‘tipping point’ in user tests, usually around the 85MHz mark, where either the micro SD card and/or the socket itself tops out due to temperature limits being reached (resulting in automatic under-clocking), or the SD card itself simply cannot run any faster due to the specific memory modules used in its manufacture.

Discussion of USB as a boot drive has become more popular with the Raspberry Pi 4; however, with the Raspberry Pi 3B+ being used in the prototype, testing of standard USB drives gave results in the range of 21-22MB/s write and 41-42MB/s read. Because of the additional manual coding that would be needed to enable USB booting, a decision was made to remain with the standard micro SD card boot medium and investigate options for maximum performance.

As the Raspberry Pi 3B+ is a mature platform, extensive tests of the standard read/write performance of micro SD cards had been completed before this prototype was designed. Geerling (n.d.) gave the following summary of Pi 3 Model B+ data.

FIGURE 3 – READ/WRITE PERFORMANCE OF VARIOUS MICRO SD CARDS FOR RASPBERRY PI 3B+

(GEERLING, N.D.)

A decision was made, based on this test data and the costs of the micro SD cards, that the Samsung EVO Plus 32GB micro SD card would be purchased as the main boot drive.

Once the card was chosen, multiple random read and write tests were undertaken to determine the performance of the card and capability of overclocking. Similar to overclocking the CPU, overclocking the card reader can cause data corruption if the card is not stable.

The SD card reader, as a main board component, can be overclocked by adding the following to the standard config file (Geerling, 2016).

dtoverlay=sdhost,overclock_50=100

The configuration file can be accessed through the Raspbian OS terminal using the command "sudo nano /boot/config.txt". The line above tells the Raspberry Pi to run the reader at a clock speed of 100MHz instead of the default 50MHz. Once this is added, the Raspberry Pi needs to be rebooted for the change to take effect.

Geerling (2016) also provided the script for installing and then running the same SD card benchmarks used for the base results.

$ curl http://www.nmacleod.com/public/sdbench.sh | sudo bash

The above installs and runs hdparm, plus some large-file read/write benchmarks 19. This was then used for three runs at different clock speeds to determine the best setting for the purchased card.

TABLE 4 – READ/WRITE PERFORMANCE (MB/s) OF SAMSUNG EVO PLUS 32GB FOR OVERCLOCKED READER

The reason for the multiple read/write tests can be seen with the Write Test 1 data. This is where, due to the initial setup of the benchmark, performance of the Raspberry Pi drops enough to result in a false reading. However, it is included in the data for reference and can consistently be replicated in multiple runs of the benchmark.

Additionally, after 75MHz (benchmark reads the ‘actual’ clock at 71.429MHz) there is a drop-off in speed improvement, relative to the power consumption and heat given off by the reader. Equally, the card refused to boot the Raspberry Pi when set to anything higher than 85MHz, which is why that is the highest clock reported.
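The 71.429MHz ‘actual’ figure suggests the reader derives its clock as an integer divisor of the 500MHz core clock. The following sketch reproduces that relationship; note the integer-divisor behaviour is inferred from the reported numbers, not taken from the SoC documentation.

```python
import math

CORE_CLOCK_MHZ = 500  # internal system bus clock

def actual_reader_clock(requested_mhz, core_mhz=CORE_CLOCK_MHZ):
    """Nearest achievable reader clock at or below the requested value,
    assuming the clock is core_mhz divided by a whole number."""
    divisor = math.ceil(core_mhz / requested_mhz)
    return core_mhz / divisor

# requesting overclock_50=75 yields 500/7, roughly 71.429MHz
# requesting overclock_50=100 yields 500/5 = 100MHz exactly
```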

A decision was made to set the reader clock to 75MHz. This is due to the above falloff in improvement on the higher speed, as well as additional instability with the higher speeds. The main positive is higher read/write speeds for only a modest clock increase, while not experiencing the increased levels of heat given off by the card reader at higher speeds.

5.2 Raspberry Pi Camera v2 Initial Hardware and Software

Completed early in the prototype development, these tests were conducted to see if there were any obvious defects with the camera and if it could capture data sufficiently well to suit the requirements of the prototype.

These simple tests were not designed around the output of the camera under telecine conditions; they were more an experiment to discover the issues (or lack thereof) of the Raspberry Pi camera out-of-the-box.

This was conducted in a “worst case” scenario with no mounting for the camera and uneven light. This allowed observation of how the camera's built-in firmware could handle complex scenes, helping to justify certain functions of the camera when it came to its use in the telecine prototype.

Once the prototype is constructed, these tests (or a form of them) would be re-visited to observe and review its performance under the specific conditions presented by the prototype design.

5.2.1 Test Charts

For the initial tests, two charts were used to check resolution (clarity) performance and luma colour reproduction (dynamic range). The aim was to evaluate initial calibration of the camera from the factory and to compensate if needed. Vertex Video (n.d.) ACCU-CHARTs were used for these tests to allow for comparison to standard outputs.

FIGURE 4 – A GSG-11 ACCU-CHART

The GSG-11 ACCU-CHART was utilised as the 11 steps of black-to-white tones allow for a suitable test of the standard colour reproduction from the factory.

FIGURE 5 – UNALTERED CAMERA OUTPUT OF GSG-11 ACCU-CHART

Although obtaining even illumination without a sufficient mounting system for the camera was an issue, it was observed that the camera was producing more magenta colour tones than what should be graduated grey tones if the camera was colour balanced.

FIGURE 6 – A RESOLUTION ACCU-CHART

A resolution chart such as the one in the figure above is designed to test a camera's frequency response in both the centre and corners of the image.

FIGURE 7 – UNALTERED CAMERA OUTPUT OF RESOLUTION ACCU-CHART

As with the previous output, the colour is clearly magenta in tone; overall, however, there is a reasonable frequency response, providing good clarity for a standard definition image.

Although the output of the camera could be classed as generally acceptable for the prototype, especially as it is an inexpensive camera, at this point in the testing a decision was made to re-evaluate the camera once the prototype was constructed to allow for further evaluation and calibration.

5.2.2 Camera Stability

As part of the early testing, an investigation into the available hardware to improve stability for the Raspberry Pi camera was undertaken.

The decision was made to source a more fixed mounting solution for the camera, with a matching film gate/plate for more precise camera shots, due to stability issues which came up in the initial test chart evaluation. Alexamder (2015) had a set of working models for the Raspberry Pi camera and a gate based upon the METS (n.d.) design 20. Using existing designs allowed for a significant drop in cost (as with the other 3D-printed components, this is cheaper than sourcing expensive originals), as well as in the time needed to create the physical components for testing.

As stability is key to any form of image bracketing or RAW photography arrays, once these parts were added into the design, a decision was made to re-evaluate these options after the prototype was constructed, to allow for potential further calibration of the system to improve output.

5.2.3 Camera Software Comparison

An advantage of the Raspberry Pi and the Raspbian OS is the open-source nature of the hardware and software. Based on popularity of the Raspberry Pi, there is a lot of existing software examples available, in various stages of completion. As such, an investigation was undertaken as to the available options to allow for the best solution for the prototype.

Jphfilm (2018) had the most complete design available, with full colour controls and exposure compensation. However, it was only designed to work with the rest of their project, which used two Linux-based systems to handle a portion of the image encoding. Additionally, that design was based upon retrofitting an 8mm projector, rather than a standalone prototype like this project. As such, a lot of the features did not apply to this project and would have been wasted performance allocation to run. Finally, although the designer had since uploaded ports of the design to newer software, it was originally based on older Python 2.7 code and ported to Python 3 almost 3 years ago. As Raspbian has since updated again to Python 3.7, a decision was made not to utilise this software.

Another piece of software available was a general GUI (Graphical User Interface) for programming the Raspberry Pi Camera 21. Created by Billwilliams1952 (2016), this was the most comprehensive software available for the Pi Camera during project development. Although it was impressive, its standalone nature and incompatibility with other pieces of hardware and software (and therefore the complexity of integrating it into the project) meant a decision was made not to utilise this software.

As was made clear during this initial testing phase of the prototype, obtaining the right pre-made software was going to be exceptionally complex, as would the task of retrofitting it for the project. Due to this, a decision was made to create standalone Python code to suit the purpose of the project.

Due to additional testing needing to be completed on the full camera output, the initial code commit was to have the camera simply taking standard JPG images, with the option to add in RAW image capability once the prototype is built.

5.2.4 Creating Camera Python Code

Having made the decision to create standalone software for the prototype, an investigation was undertaken to see what underlying python library the above software examples were using.

The code library used in this development is called Picamera; this package (Picamera, n.d.) was designed specifically for the Raspberry Pi camera module to provide a purely Python-based interface. This is a step up in complexity from the raspistill terminal functions, as it allows a programmable interface rather than a simple command-line setup.

The following was the raw code used for the initial camera setup.

from picamera import PiCamera
from time import sleep

camera = PiCamera()
camera.rotation = 180
camera.resolution = (1920, 1080)
camera.framerate = 15
camera.brightness = 70
camera.iso = 100
camera.shutter_speed = camera.exposure_speed
camera.exposure_mode = 'off'
g = camera.awb_gains
camera.awb_mode = 'off'
camera.awb_gains = g

camera.start_preview()
for i in range(10):
    sleep(3)
    camera.capture('/home/pi/Raspberry Pi Images/image%s.jpg' % i)
camera.stop_preview()

To aid understanding, what follows is a breakdown of the various sections of this code.

from picamera import PiCamera
from time import sleep

This imports the Picamera library needed for camera control, in addition to the sleep timing function.

camera = PiCamera()

camera.rotation = 180

This firstly sets terms for the rest of the code, namely that ‘camera’ is going to be used to call the various PiCamera functions, and gives the first camera command, which rotates the image by 180 degrees. This decision was made because once the camera is mounted and the LED bulb is used to illuminate the film, the setup effectively has the same functionality as a projector.

As such, the camera sensor in this instance becomes the projector screen, yet it is viewing the film from the back, so a simple rotation needs to be undertaken to end up with correct output files.

camera.resolution = (1920, 1080)

camera.framerate = 15

In order to get the correct resolution and framing for the film itself, the camera maximum resolution output is set to 1920×1080.

Although this will result in a slightly higher resolution than originally planned, due to constraints in how the camera firmware sets resolution, the alternative here would be setting the camera to 1280×720, at which point the prototype would not have sufficient detail of the frame itself.

The ‘ideal’ output was calculated at 1075 x 546 for the complete frame (including perforations), with the resolution stated in ‘4.1.3 Justification for prototype telecine scanning quality’ to be 720 x 576 for just the visible frame area.

Although the camera can shoot in this resolution mode up to 30fps (frames per second), it is set to 15 here to allow for a slower shutter. This is due to the camera functioning in a similar manner to an “always on” video camera rather than a stills camera.

It is important to set this otherwise the camera will float this value based on other set (or automatic) values.
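The framerate bounds the longest available exposure: picamera expresses shutter_speed in microseconds, and a frame cannot be exposed for longer than its own frame period. A quick check of the figures above:

```python
def max_shutter_us(framerate):
    """Longest exposure (in microseconds) available at a given framerate,
    since a frame cannot be exposed for longer than the frame period."""
    return int(1_000_000 / framerate)

# at 30fps the ceiling is 1/30s (about 33333us); dropping to 15fps
# doubles it to 1/15s (about 66666us), allowing the slower shutter
```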

camera.brightness = 70
camera.iso = 100
camera.shutter_speed = camera.exposure_speed
camera.exposure_mode = 'off'

One of the largest technical limitations of the Raspberry Pi Camera is the default automatic values typically ‘float’ (not a fixed value) from shot to shot, often regardless of the current light levels being shot. Therefore, this addition to the code is an attempt to limit floating values on the PiCamera.

As stated, the Raspberry Pi Camera functions like a video camera and not a stills camera (even though it can take stills). This code is used to control the camera as tightly as possible.

Brightness is set according to preference in testing. During initial code design, this was given a ‘nominal’ value commonly seen for stills on a bright day. Later in development of the prototype, this value will be experimented with by enabling the LED bulb and viewing different values for the best overall clarity and colour reproduction.

ISO is set to the Raspberry Pi Camera's base value per its specifications; however, it is worth noting that this is where the camera's video-output design, rather than stills, starts to become more self-evident. ‘100 ISO’ is a floating value aimed at better overall video performance, something often seen from camera manufacturers when the aperture of a lens changes slightly as you zoom (commonly referred to as ‘breathing’). In the Raspberry Pi Camera, the ‘true’ reading for ISO when set to 100 is anywhere from 100-180, based on what the camera decides it needs for image brightness.

This can be seen within the software designed by Billwilliams1952 (2016) highlighted in section 5.2.3, which was the reasoning behind setting brightness and ISO in the prototype custom code. The alternative is to have the camera manually adjusting, which can result in poor optical output.

In addition to this, as the camera lens on the Raspberry Pi is going to be screwed out of its housing to ‘enable’ macro photography by adjusting the flange distance (the distance measured from the back of the lens to the sensor), the characteristics of the system are fundamentally changed, making the calculations done by the manufacturer to automatically control these settings invalid.

Unfortunately, this is a limitation of the Raspberry Pi camera. The manufacturer never specifically designed it for this purpose and adjusting the lens in this way is not recommended by the manufacturer, as they glue the lens into the housing to ensure it does not move from its screwed in position.

As with the other commands in this section of code, Auto EV is effectively removed by linking the shutter speed to the camera's current exposure speed and then disabling automatic exposure, allowing simple 1/15s frame-rate control from the previous code. Although with a stills camera you would use shutter speed control, the PiCamera performs better with video control input.

g = camera.awb_gains
camera.awb_mode = 'off'
camera.awb_gains = g

The next section of code acts as a feedback loop to remove the automatic white balance gain function, allowing only manual control and an unmodified output from the sensor, without any automatic correction of RGB values.

The main negative of this setup is that any white balance issues will have to be corrected in post-production; however, an unaltered output allows for a better overall result.

Although white balance can be set manually, it is likewise set to off so that the camera does not create a floating software white balance value, which results in poor colour reproduction. As a specific Kelvin value cannot be set (only general terms like ‘cloudy’, ‘sunlight’, ‘indoors’, etc.), this is the better alternative.

With this set to off, the camera simply captures a neutral image, allowing white balance correction to be done more easily in post-production.

camera.start_preview()
for i in range(10):
    sleep(3)
    camera.capture('/home/pi/Raspberry Pi Images/image%s.jpg' % i)
camera.stop_preview()

The preview command allows you to see what the camera ‘sees’ as it takes its picture. Although conditions can be set for the preview resolution, leaving it without conditions defaults the preview to fullscreen (based on the Raspbian OS set resolution).

The next section of code is a for loop, which sets the number of images the camera takes in sequence; this is changed by altering the value in range(10) to the number of images required. For each image in the sequence, the loop first sleeps for 3 seconds (in this code example), then takes an image based on the conditions set in the rest of the code and saves it to the location specified.

Once completed, the camera preview closes, allowing normal OS functionality. As this code is not using any GPIO or power systems (due to the Raspberry Pi camera having its own interface on the Raspberry Pi board), there is no additional code clean-up required.

5.2.5 Google Drive Automatic Synchronisation

As mentioned in the above code, it is possible to state where the Raspberry Pi Camera saves the images. By default, this can be set to a folder on the Raspberry Pi.

However, initial research raised the issue of the potential size and quantity of the images taken to generate the sequence from the telecine prototype in post-production. For instance, at 18 frames per second, a standard 200ft reel of footage (usually around 12 minutes) produces 12960 images. Although the micro SD card can be removed, inserted into a reader and the images transferred over, this is a time-consuming process, and due to the prototype's layout, the micro SD card would not be easily accessible. Because of this, a decision was made to create a cloud storage synchronisation solution for the Raspberry Pi.
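The 12960 figure follows directly from the runtime and frame rate; for other reel lengths the same arithmetic applies:

```python
def frames_for_reel(minutes, fps=18):
    """Number of still images needed to cover a reel of the given
    runtime (in minutes) at the given capture frame rate."""
    return int(minutes * 60 * fps)

# a standard 200ft reel at around 12 minutes of footage:
# frames_for_reel(12) gives 12960 images
```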

A small investigation found that a program called Rclone (Craig-Wood, 2014) was being maintained as a command line program (a program which runs in the Raspbian terminal) to setup a cloud based automatic sync for Raspberry Pi.

As specified by the designer, the initial download and install is run using the following command:

curl https://rclone.org/install.sh | sudo bash

This then takes you through an installation process onto the Pi, where multiple additional pieces of data are required, including Google Client ID, Client Secret Code and Verification Code 22.

Although Google caps access at 10,000 user verification requests daily, in the case of this application, with a single user syncing one folder, this limit is not hit (a user verification does not equal an upload, as multiple uploads can take place each hour). This setup therefore allows seamless synchronisation of the saved images from the camera to a folder on Google Drive.
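Once rclone is configured, the periodic sync can also be driven from Python. The following is a minimal sketch; the remote name `gdrive:telecine` is a placeholder, not the name used in the project.

```python
import subprocess

def build_sync_command(local_dir, remote):
    """Assemble the rclone invocation that makes the cloud remote
    mirror the local capture folder."""
    return ["rclone", "sync", local_dir, remote]

def sync_images(local_dir="/home/pi/Raspberry Pi Images",
                remote="gdrive:telecine"):
    # "rclone sync" makes the remote match the local folder exactly
    subprocess.run(build_sync_command(local_dir, remote), check=True)
```

In practice this could be scheduled with cron so that images upload while the transfer is still running.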

5.3 LED Control

As the LED bulb required mains power, a way to control it through the Raspberry Pi needed to be designed. A decision was made to use the standard GPIO Python code interface for the Raspberry Pi, together with a 5V relay, to create the control system.

In order to understand the requirements for the circuitry, a sketch was created to show the basic connectivity design before construction.

FIGURE 8 – SIMPLE CIRCUIT SKETCH FOR LED RELAY CONTROL

Connecting the LED holder to the main supply and 5V relay, then connecting to the Raspberry Pi 3 Model B+ GPIO pins in this way allows for simple control (on/off of the bulb) to be handled by code running on the Raspberry Pi.

FIGURE 9 – GPIO PIN LAYOUT RASPBERRY PI 3B+ (MATT, 2012)

A 5V and Ground pin, in addition to GPIO21 pin are used for providing power as well as means of control of the relay through the Raspberry Pi. Then on the relay, the 5V is connected to VCC, ground to ground and GPIO21 to IN.

5.3.1 LED Control Python Code

The following code is then used to allow for control of the LED.

import RPi.GPIO as GPIO
from time import sleep

channel = 21

# GPIO setup
GPIO.setmode(GPIO.BCM)
GPIO.setup(channel, GPIO.OUT)

def led_on(pin):
    GPIO.output(pin, GPIO.HIGH)  # Turn led on

def led_off(pin):
    GPIO.output(pin, GPIO.LOW)  # Turn led off

if __name__ == '__main__':
    try:
        led_on(channel)
        sleep(35)
        led_off(channel)
        sleep(5)
        GPIO.cleanup()
    except KeyboardInterrupt:
        GPIO.cleanup()

The code is designed to allow this level of control, sending a simple signal to the relay to switch power to the LED on and off. The 5V relay used was a Tongling 5VDC JQC-3FF-S-Z with interface board by DSIDA (n.d.).

import RPi.GPIO as GPIO

from time import sleep

channel = 21

# GPIO setup

GPIO.setmode(GPIO.BCM)

GPIO.setup(channel, GPIO.OUT)

The initial setup imports the RPi.GPIO library and simplifies its designation as GPIO. The sleep function is also imported, and the GPIO pin number is defined as the channel number.

Although a personal preference, the Broadcom SOC channel (BCM) nomenclature was used to ensure a standard GPIO pin was being addressed and not any of the available power or ground pins.

The next section of the code defines the function of the GPIO output: in this instance, to provide a HIGH or LOW signal to the relay, which in turn enables power to the LED (HIGH) or turns it off (LOW).

def led_on(pin):
    GPIO.output(pin, GPIO.HIGH)  # Turn led on

def led_off(pin):
    GPIO.output(pin, GPIO.LOW)  # Turn led off

The next part of the code is a standard Python try/except block, designed to turn the LED on for a length of time based on the sleep duration, and to turn it off either once the time elapses or when a keyboard interrupt is used in the terminal.

if __name__ == '__main__':
    try:
        led_on(channel)
        sleep(35)
        led_off(channel)
        sleep(5)
        GPIO.cleanup()
    except KeyboardInterrupt:
        GPIO.cleanup()

This allows for easily customisable code where the duration the LED remains ON can be changed based on the speed of the rest of the prototype once it is constructed.

Because of this, a decision was made to re-test this code after main construction to test suitability for the prototype.

5.4 Initial Chassis Design

Using the MakerBeam kit provided, the initial chassis was created from scratch based upon the requirements of the prototype.

FIGURE 10 – INITIAL ROUGH SKETCH FOR TELECINE PROTOTYPE DESIGN

A rough sketch based upon various designs was created in order to be able to create a chassis for the prototype.

The general philosophy of the design was to provide a simple film travel to limit over tightening (and as such potential damage) of the film, while allowing for sufficient layout space for the cabling from the breadboard and LED control to the Raspberry Pi.

FIGURE 11 – ROUGH SKETCH FOR TELECINE PROTOTYPE REVISED DESIGN

When the initial design was investigated, there was concern expressed by LJMU technicians over the temperature of the LED element, as well as its impact on the circuitry housed on the breadboard.

A decision was made to place the LED in a ceramic housing. This housing is typically used when wall-mounting mains-powered LEDs to provide maximum heat dissipation. Because of this, the LED control box housing its electronics was moved forwards, resulting in the breadboard being moved back and the Raspberry Pi board being mounted to the back of the chassis. An additional adjustment was made to the angle of the film path in order to further improve film travel23.

5.5 Motor Power Delivery

During the initial test phase of the prototype, it became clear that the chosen stepper motor driver modules (L298N H-bridge driver boards) were not delivering sufficient power to the motors. This is generally indicated by the motor sounding like it is moving while the shaft either moves only sporadically or vibrates without noticeable rotation. Unfortunately, due to the limited testing options available at this stage of the build, all that could be checked was that the driver modules were pulling down a significant amount of power (roughly measured at 2.5V and almost 3A on an AstroAI DT132A multimeter) and as such were unfit for purpose.

A decision was made to replace this part with the StepStick DRV8825 stepper motor driver carrier (RepRap module). Based on its specifications (Pololu, n.d.), it could not only supply sufficient power for the motors but also provided sufficient control to allow microstepping.

The 12V power transformers were wired and connected through to the mains using an IET BS 7671 compliant 3-pin UK plug (Figure 12).

FIGURE 12 – WIRING OF THE 12V POWER TRANSFORMER

5.6 Motor Control

With the above change, additional coding and circuitry needed to be designed in order to function correctly with the rest of the telecine prototype. A simple circuit sketch was completed to understand the required connections for the driver module: from the 12V power supply units, the 4-pin paired motor connector cables, and the connections needed to the Raspberry Pi GPIO.

FIGURE 13 – SIMPLE CIRCUIT SKETCH FOR STEPPER MOTOR DRIVER CONTROL

From this initial sketch (Figure 13), the design was transferred over to the main installation, using a breadboard to assist with wiring up the 4 drivers (1 for each motor). Additionally, the 12V transformers were wired and installed to the driver modules for testing 24.

A 25V, 100µF capacitor was placed between the 12V transformer power input and ground to absorb any power cycling back through the driver module during shutdown, keeping the Raspberry Pi GPIO safe from feedback. After this, the ground connection returns to a ground pin on the Pi to complete the circuit. Although not needed at this level of power, a small heatsink was applied to the driver for safety.

One of the useful features of the StepStick DRV8825 Stepper Motor Driver is automated power delivery to the motor. Although the driver receives 12V 5A, the motor only requires 4.8V 1.5A per phase, which allows for overhead for spikes in power delivery requirements (during initial enabling of the motor).
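On Pololu-style DRV8825 carriers, the per-phase current limit is set with the on-board potentiometer, with the limit being roughly twice the reference voltage (VREF) measured at the trimpot. A quick check for the 1.5A-per-phase motors used here (the formula is taken from Pololu's carrier documentation, not from this project's notes):

```python
def vref_for_current(limit_amps):
    """VREF (volts) to dial in on a DRV8825 carrier for a given
    per-phase current limit, using Pololu's I_limit = 2 * VREF."""
    return limit_amps / 2

# 1.5A per phase means setting VREF to 0.75V at the trimpot test point
```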

Once installed, the motors worked to specification and testing could begin of the code for motor control.

5.6.1 Motor Control Python Code

Within the prototype design, it is clear that the 4 motors have different roles.

Firstly, there are the two motors attached to the film plates which handle the film reels themselves. Because of this, the strength of rotation required from these motors differs due to the weight of the film and reel. Investigation after the electrics were installed found that these motors would not be able to use micro-stepping: they do not output sufficient torque when not using a full-step rotation, resulting in the reel not moving. This was to be expected, as a standard 200ft film reel typically weighs upwards of 200g. It is worth noting that although these motors can move larger objects faster than in this prototype, the key difference in this design is the need for precise, small movements.

By comparison, the motors which handle the movement of the sprockets have very little weight to deal with (the 3D-printed sprocket, being hollow, weighs less than 1g) and as such could utilise micro-stepping for very smooth and precise movement of the film.

An investigation of stepper motor design was first undertaken to understand how micro-stepping is possible; Nanotec (n.d.) explains the motor circuitry, along with an animation of step operation. The M0, M1 and M2 pins on the stepper motor driver are connected to the Raspberry Pi GPIO pins, which allows the Raspberry Pi to set the stepping mode, from full step down to 1/32 micro-step, using a resolution dictionary based upon the information provided by Nanotec (n.d.). In the case of the NEMA17 motors being used, the step angle is 1.8°, which results in a full-step count of 200 per revolution, or a maximum of 6400 micro-steps in 1/32 mode.
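The step counts quoted above follow directly from the 1.8° step angle; a small sketch:

```python
FULL_STEPS_PER_REV = 200  # 360 degrees / 1.8 degree step angle (NEMA17)

def steps_per_revolution(microstep_divisor):
    """Steps per full shaft revolution at a given microstepping mode."""
    return FULL_STEPS_PER_REV * microstep_divisor

def degrees_per_step(microstep_divisor):
    """Shaft rotation per step pulse at a given microstepping mode."""
    return 360 / steps_per_revolution(microstep_divisor)

# full step: 200 steps/rev at 1.8 degrees each
# 1/32 microstepping: 6400 steps/rev at 0.05625 degrees each
```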

The code highlighted below controls the spool-out motor (the motor handling the most weight at the start of a transfer). Additional code for the other motors can be found in appendix 25.

 

The first sections of code are designed to allocate the correct GPIO pins to the inputs on the stepper motor driver.

DIR = 12 # Direction GPIO Pin

STEP = 16 # Step GPIO Pin

CW = 1 # Clockwise Rotation

CCW = 0 # Counterclockwise Rotation

SPR = 200 # Steps per Revolution (360 / 1.8)

As with other python code used, the GPIO is then setup, including the nomenclature and function requirements. It is here, for example, where the direction of the motor rotation is set.

GPIO.setmode(GPIO.BCM)

GPIO.setup(DIR, GPIO.OUT)

GPIO.setup(STEP, GPIO.OUT)

GPIO.output(DIR, CW)

The next section of code is designed based on the specification from Nanotec (n.d.). As such, the GPIO pins used to connect M0, M1 and M2 are first defined, with the micro-stepping resolution table created.

MODE = (25, 8, 7) # Microstep Resolution GPIO Pins

GPIO.setup(MODE, GPIO.OUT)

RESOLUTION = {'Full': (0, 0, 0),

'Half': (1, 0, 0),

'1/4': (0, 1, 0),

'1/8': (1, 1, 0),

'1/16': (0, 0, 1),

'1/32': (1, 0, 1)}

GPIO.output(MODE, RESOLUTION['Full'])

Because of the increased torque requirement from the film reel weight, the full step mode is set.

The next section determines the number of steps the motor needs to make. It is set to 10 steps (an 18° rotation), with a 0.5s delay used by the loop in the next section of code.

step_count = 10

delay = .500

As with the LED code, a for loop is used, which allows the function to repeat for the number of steps set.

The full python code is as follows.

from time import sleep
import RPi.GPIO as GPIO

DIR = 12    # Direction GPIO Pin
STEP = 16   # Step GPIO Pin
CW = 1      # Clockwise Rotation
CCW = 0     # Counterclockwise Rotation
SPR = 200   # Steps per Revolution (360 / 1.8)

GPIO.setmode(GPIO.BCM)
GPIO.setup(DIR, GPIO.OUT)
GPIO.setup(STEP, GPIO.OUT)
GPIO.output(DIR, CW)

MODE = (25, 8, 7)   # Microstep Resolution GPIO Pins
GPIO.setup(MODE, GPIO.OUT)
RESOLUTION = {'Full': (0, 0, 0),
              'Half': (1, 0, 0),
              '1/4': (0, 1, 0),
              '1/8': (1, 1, 0),
              '1/16': (0, 0, 1),
              '1/32': (1, 0, 1)}
GPIO.output(MODE, RESOLUTION['Full'])

step_count = 10
delay = .500

for x in range(step_count):
    GPIO.output(STEP, GPIO.HIGH)
    sleep(delay)
    GPIO.output(STEP, GPIO.LOW)
    sleep(delay)

GPIO.cleanup()
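For the sprocket motors, which do use micro-stepping, the same rotation requires far more STEP pulses. As a hedged illustration (the 18° angle and 1/32 mode are taken from the examples above; the calculation itself is not from the prototype code):

```python
# Assumed values for illustration: the same 18° movement as the spool-out
# example, but driven in 1/32 micro-step mode as used for the sprockets.
SPR = 200          # full steps per revolution (1.8° motor)
MICROSTEP = 32     # 1/32 mode, MODE pins set to (1, 0, 1)
angle = 18         # degrees of rotation wanted

pulses = round(angle / 360 * SPR * MICROSTEP)
print(pulses)      # 320 STEP pulses instead of 10 full steps
```

Each of those pulses drives the STEP pin HIGH then LOW exactly as in the full-step loop above, only with a much shorter delay, giving the smoother film movement described earlier.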

5.7 Running Multiple Python Scripts

Stack Overflow (2017) explains the standard methods for running multiple Python scripts simultaneously and sequentially. For the telecine prototype, the script order is determined by the film travel direction:

  • Spool Out
  • Rollers In
  • LED
  • Camera
  • Rollers Out
  • Spool Up

 

Due to this, a simple bash script can be created which informs the Raspberry Pi which order to run these scripts in to allow for normal operation.

#!/bin/bash

for i in $(seq 10); do
    python spool_out.py
    python rollers_in.py
    wait
    python ledcontrol.py &
    python camera.py &
    wait
    python rollers_out.py
    python spool_up.py
done

Here the spool_out and rollers_in scripts run sequentially, followed by a wait command which tells the Raspberry Pi to wait until these scripts have completed before continuing. This allows for fine timing of the motor controls without the camera trying to take a picture before the film has stopped moving. Additionally, it prevents too many GPIO signal lines being driven at any one moment.

Next, the ledcontrol and camera scripts run simultaneously (launched in the background with &). This allows the LED to be on precisely for the duration of the camera function. Again, the Raspberry Pi will wait for these two scripts to finish before continuing. Finally, the rollers_out and spool_up scripts run sequentially to move the film from the camera film gate to the take-up spool.

This is all contained within a simple loop, which in the example is set to run 10 times. It is within this script that the user sets the number of frames to be scanned before running it; from that point, the entire process is automatic, allowing for hands-free operation. In order for this bash script to run, the file needs to be set as an executable.

chmod +x initiate_tele.py

Then run the file using the following command line.

./initiate_tele.py
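The same sequencing could equally be sketched in Python itself using the standard subprocess module. This is an alternative sketch, not the prototype's actual approach; the script names match the bash script above, and the run_cycle helper is hypothetical:

```python
import subprocess
import sys

def run_cycle(sequential_pre, parallel, sequential_post):
    """One frame cycle: film-advance scripts in order, then LED and
    camera together, then take-up scripts in order."""
    for cmd in sequential_pre:
        subprocess.run(cmd, check=True)            # blocks until finished
    procs = [subprocess.Popen(cmd) for cmd in parallel]
    for p in procs:
        p.wait()                                   # equivalent of bash 'wait'
    for cmd in sequential_post:
        subprocess.run(cmd, check=True)

# With the prototype's script names this would be called as, e.g.:
# for _ in range(10):
#     run_cycle([["python", "spool_out.py"], ["python", "rollers_in.py"]],
#               [["python", "ledcontrol.py"], ["python", "camera.py"]],
#               [["python", "rollers_out.py"], ["python", "spool_up.py"]])
```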

5.8 Full Chassis Build

A significant amount of the prototype construction was made far simpler, and faster, than originally planned by the inclusion of the MakerBeam kit. The beams themselves are 10×10mm T-slot with a hollow core threaded for M3 square head bolts. The kit is specified by MakerBeam (n.d.) as comprising multiple parts, the vast majority of which were used in the prototype construction26.

FIGURE 14 – COMPLETED TELECINE PROTOTYPE WITH MAKERBEAM CHASSIS

The provided MakerBeam kit allowed for the rapid construction of the prototype telecine for full testing. This became especially important as earlier iterations on the design became incompatible and the simple construction methods using the beams and brackets to create a usable framework saved a significant amount of time.

6 Results

As per the specifications stated in ITU-R BT.601, the telecine prototype was able to produce a resolution output of PAL UK format.

A number of issues were experienced due to design, hardware, and software limitations; these are reviewed in the sections below alongside the technical output of the media.

6.1 Stability

When initially creating the image sequence in Adobe Premiere Pro it is very clear that there is an issue with stability. This occurs when the film is not lined up pixel perfect to the previous frame and an effect similar to camera shake can be observed (Cunningham, 2020a).

The sequence is linked into Adobe After Effects to utilise the Motion Tracker tool to stabilise the footage, using the Super 8mm perforation in the middle of the frame as a reference for the movement analysis to adjust the individual frames. This method of using the perforation location as a technique for stabilisation is typically used in more advanced systems, due to the SMPTE ST 154:2003 which states the maximum dimensions of Super 8mm film when used for projecting (which in effect is what a telecine system emulates).

The main reason any movement of the frame is so noticeable is because of the method of film transfer itself. When you consider standard motion blur, this is due to your shutter speed not being fast enough to allow a camera to capture all of the data of a moving object, resulting in the apparent blurring of the object. Although in cinematography, this relationship between frames per second and shutter speed is manipulated for artistic preference, in this instance the film is being captured exactly frame-by-frame, resulting in an effective shutter speed matching frame rate when the image sequence is played back as footage, meaning any movement imperfection is glaringly obvious.

When Adobe After Effects has analysed the sequence and stabilised the frames, the main Adobe Premiere Pro timeline provides the user with feedback on the scale of the adjustment, which across the sample equates to After Effects needing to move the frames by an average of 120.4 pixels every second.

 

FIGURE 15 – ADOBE PREMIERE PRO STABILISATION DATA

The closer this value is to zero, the more stable your footage and the less adjustment Premiere Pro needs to do each frame to stabilise the output. This irregularity can also be seen in the corresponding graph, which can be used to further analyse the footage for stability issues.

Smaller spikes on the graph are generally just irregular movement of the film as it gained and lost tension (a design flaw); however, the largest spikes, at approximately 30% and 70% through the sample, were both specifically frame slips.

FIGURE 15 – A SLIPPED FRAME, RESULTING IN LOST DATA

This occurs due to the condition of the film itself and is a result of a damaged perforation on the film, resulting in the sprockets losing grip on the film and the tension of the film being lost.

The loss of stability results in a lower quality visible output because focus is set to the middle of the frame. The distances involved are very fine (17mm from film to sensor), so any time the frame is not perfectly centralised, clarity is lost.

However, the Adobe Premiere Pro/After Effects motion tracker does allow for an improvement in overall stabilisation (Cunningham, 2020b).

6.2 Colour

As can be seen above, the film itself has suffered a significant amount of red fading over time. This is due to the dye used in creating the emulsion for film, specifically the blue/cyan dye fading, which typically results in the red colours in the film becoming more pronounced.

Once imported into Adobe Premiere Pro, the sequence can undergo a level of correction in an attempt to resolve this issue.

One advantage with this sample footage is the availability of transferred footage of exceptional quality to use as a reference (Disney, 2020). However, the process is generally the same regardless of whether reference images or footage are available.

From within the ‘Color’ tab of Adobe Premiere Pro, the Waveform RGB scope is used to observe the telecine footage.

FIGURE 16 – WAVEFORM RGB SCOPE COMPARING REFERENCE (DISNEY, 2020) TO TELECINE PROTOTYPE

As can be seen when comparing to the reference on the left, although the blue and green levels appear reasonably close at first glance, the colour fading resulting in the distinct red fade on the film is very evident.

6.2.1 Removing ‘Red Fade’ Effect from Blue/Cyan Film Emulsion Dye

One of the more complex aspects of the correction is removing the excessive red colour from the transfer. This has been caused by the failure of the blue/cyan dye which means there is a loss in blue colour tones. In the example, the complexity of this is self-evident when requiring detail in both the green and blue colours.

When correcting, this results in an image similar to the original; however, detail in the frame which was originally blue in colour is lost due to removing the red shift.

If this setup had not been restricted to standard tools, methods similar to those used in They Shall Not Grow Old (2018) could have been applied, where advanced computational learning and precise image maps were used to replace the footage with the desired colour.

FIGURE 17 – SIDE-BY-SIDE COMPARISON BETWEEN REFERENCE AND TELECINE CORRECTED FOOTAGE

As a colour tone is effectively being removed significantly in order to bring up the green and blue tones, the majority of the green and red colour tones are recovered from the original; however, due to the lack of blue, anything which contained blue tones is lost.

The correction is handled with only the Lumetri curve controls, which allow a user to adjust the RGB colour output across shadows, midtones and highlights individually. However, due to the extreme colour failure of the cyan/blue emulsion, more aggressive correction is not possible as it would result in further colour loss, rather than improvement.

FIGURE 18 – LUMETRI CURVE CORRECTIONS FOR TELECINE SEQUENCE

If no correction was required, then this would show as a straight line (as the luma channel selected in Figure 18 shows).

Although this method was chosen as it most closely resembles the semi-automated correction often used when scanning film, it is clear in this instance that the footage was badly damaged and would need far more time-consuming mapping before it could begin to approach broadcast standard.

Due to this poor colour performance, if film were being scanned for broadcast use, this simple method produces a sub-standard result. There is far too much variation in luma frame-to-frame (due to irregular values recorded by the Raspberry Pi Camera) and the level of colour degradation is such that a more complex and time-consuming correction would be required before the footage could be classed as fully ITU-R BT.601 compliant.

For private home use, the end result is passable (Cunningham, 2020c). The telecine prototype output, with these simple corrections, would allow a home user to enjoy the footage again at home without the more glaring issue of the red colour shift.

6.3 Project Cost Effectiveness

In order to evaluate cost effectiveness of the prototype, the project costs need to be calculated.

Although this is not technical in nature, the aim of the project was to create a cost-effective telecine prototype, and as such a comparison of the cost-to-performance of this project versus other available products is required. As the prototype is functional (as demonstrated above), it can now be compared to the other products in this manner.

6.3.1 Component Cost

What follows is a cost breakdown of all components for the telecine prototype project. Although there were a number of additional setbacks in regard to cost, the final component cost breakdown of £152.19 is significantly cheaper than the other commercially available options.

When considering what would have been additional costs on top (the cost of MakerBeam kit at £75 and the failed stepper motor drivers at £5.79) a complete telecine prototype cost of £232.98 is still £200 cheaper than the cheapest competitor.

When compared to send-away services, despite the time involved in processing the footage personally, a user would only have to transfer between 5-9 200ft reels themselves before the telecine prototype became the more cost-effective solution.

Overall, the aim was well met in creating a cost-effective telecine prototype, as it was created for significantly less than the competing products, albeit with a decrease in transfer speed.

TABLE 5 – TELECINE COST BREAKDOWN OF COMPONENTS

7 Discussion

Although the overall results of the telecine prototype allowed for the PAL resolution output of 720w by 576h (active area), the prototype did fall short in a number of areas. What follows is a breakdown of the main issues experienced during the development of the prototype, as well as the impact they had on the final product.

7.1 Power Delivery

With little experience in power delivery and how it pertains to product design, this was the hardest problem to quantify.

According to the product specification (Appendix 13), the L298N Module H Bridge Drivers should have comfortably been able to handle the power delivery to the motors. They are a recommended driver for tasks involving both DC motors and stepper motors. However, when it came to the circuitry test, they could not deliver sufficient power to the motors (resulting in the shaft just vibrating in place).

Under normal circumstances, this could have been evaluated in the lab further with an oscilloscope to understand the specific power these boards were delivering. Unfortunately, this was one of the main impacts of coronavirus isolation and such analysis was impossible.

Thankfully, an alternative driver module was sourced in the DRV8825 (another recommended driver for this type of application with stepper motors), but the entire project development could have been halted had this not been possible.

Although these were successfully integrated into the design, it was done at additional cost which went against the aim of developing a cost-effective telecine prototype.

7.2 3D Printed Components

The biggest advantage of 3D printed parts (low cost and rapid production) is also their potential weakness, in that quality control and overall performance are often less than that of a high-grade product built for purpose.

This was seen in the 3D printed stepper motor mounts, which began to slowly bend under the weight of the motor (as well as the heat given off during use), and also in a number of the smaller components which needed to be filed down with needle files to create a smoother surface before they could be used with the film.

Although these were sufficient for simply testing the functionality of the prototype, based on the rate of damage seen to components like the motor mounts within the timescale of the tests, it is entirely possible that complete failure of these components could occur with extended use.

7.3 Camera

The Raspberry Pi Camera v2 has been designed with a “general use” purpose in mind; putting it into a very tightly controlled environment, both in terms of what was expected of its output and the tight projection of film onto its sensor, was clearly outside of its design specifications.

The largest issues experienced were primarily around its floating values for ISO and gain, which resulted in inconsistencies in the replication of the film's actual luma values. Although this can be mitigated somewhat by running an array capture, where the camera takes multiple shots and then compiles them into a single image (similar to how professional cameras increase their dynamic range), the Raspberry Pi Camera is then greatly slowed. This is due to the camera needing to utilise more of the GPU when capturing and compiling this additional data.

Although this allows for improved and more accurate data capture, the main downside of using these RAW capture modes (YUV or RGB) is that they require significantly more CPU performance than the compressed JPEG format. With the additional functionality of automatically syncing the images to Google Drive, these large files are also slower to transfer, resulting in a longer delay before post-production work can begin.

Finally, as with the power delivery issue, a reasonable option here would be to manually configure the camera luma and RGB output through more in-depth analysis than is currently possible with at-home equipment. Being able to analyse the camera more tightly in a lab setting would potentially allow the operator to ‘rack’ the camera, similar to a professional broadcast camera, by viewing the camera output with a waveform monitor and vectorscope before the start of the telecine process. The required values to ‘balance’ the camera can then be added into a NumPy array in the camera Python code to tightly define its output. Sadly, this was not possible during the later stages of testing due to coronavirus isolation.
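As a sketch of what such a balance array could look like, the following assumes hypothetical values measured from a neutral grey reference card; the measured and target numbers are illustrative, not calibration data from the prototype:

```python
import numpy as np

# Hypothetical values: the mean RGB the camera measured from a neutral grey
# reference, versus the level that grey should sit at once 'racked'.
measured = np.array([140.0, 112.0, 96.0])   # red-heavy, as with faded film
target = 116.0                              # desired neutral grey level

gains = target / measured                   # per-channel correction gains

def balance(frame_rgb):
    """Apply the stored correction gains to an RGB frame (H x W x 3 array)."""
    return np.clip(frame_rgb * gains, 0, 255)
```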

7.3.1 Camera Lens

The above issue is exacerbated by altering the Raspberry Pi Camera lens from its factory specifications. This was accomplished by breaking the glue seal on the lens and unscrewing it from its housing.

Although this allowed for a much tighter focus to the camera sensor (similar to a macro focus) the focus distance was so fine that any slight movement of the film from being in the exact middle of the shot resulted in an out of focus image.

This was not completely unexpected with the prototype, the camera especially being pushed to its practical limits; however, the overall ‘soft’ focus outside of the middle of the frame showed that this manual adjustment by itself resulted in a drop in focus accuracy in exchange for obtaining the closer focal distance.

7.4 Python Coding

The largest issue with the telecine prototype in this respect is the enormous library of Python code and how it functions with the Raspberry Pi 3B+. Although knowledge of Python improved as the project developed, there were issues in understanding how to sufficiently create timings between functions, which overall resulted in a slower conversion rate than would be classed as acceptable. Again, this could have been somewhat mitigated with analysis of the Raspberry Pi GPIO pins with an oscilloscope to determine the optimum timings within the code, yet due to coronavirus isolation this was not possible by the time the prototype was constructed.

Whereas professional telecine systems can transfer film in real time (and faster in some cases), the telecine prototype transferred frames for the sample analysis at a rate of 3.6 seconds per frame, which resulted in the 40 seconds of sample footage taking the Raspberry Pi a total of 1 hour 12 minutes to process.

For personal home use this may not matter, depending on user preference, and as it is all automated the whole process was a seamless “start and leave it to it” setup. However, this is notably slow when compared to existing commercial products like the Reflecta (n.d.), which states that, “the film is scanned frame by frame at a speed of two frames per second”.

7.5 Overall Raspberry Pi Functionality

When combined with the camera and code issues, although the Raspberry Pi 3 Model B+ performed acceptably for the current design of the telecine prototype, it is clear that if improvements were to be made, the processing speed of the 3B+ would not be sufficient to speed up the overall conversion.

This is very much a classic chicken-and-egg problem: based on the current performance of the telecine prototype, the Pi is perfectly acceptable, as it never died, failed or crashed at any point in the transfer process; but if improvements were made to some of the other components, then it is likely that the Raspberry Pi 3B+ would need to be changed also.

Additionally, although the Raspberry Pi is capable of generating the video sequence from the individual captured frames using ffmpeg, as well as the user then being able to colour correct the footage with OpenShot Video Editor software, due to the slow nature of this conversion, the individual frames were transferred to a PC to handle all the post-production analysis, correction and final encoding.

These post-production tasks are CPU intensive and the Raspberry Pi (like many entry-level single board computers) struggles with them, so this was not unexpected in regard to the prototype telecine performance. It does, however, result in a limitation of the project, as access to a sufficiently powerful PC is needed to perform these corrections efficiently.

8 Conclusion

This section provides further evaluation of the project, as well as a review for how it met the aim and objectives stated. It also offers recommendations on telecine film transfer and potential further study topics for improvement.

8.1 Reflection

This project explored the viability of creating a cost-effective telecine prototype. To implement this, an in-depth review of legacy hardware and technical standards relating to film was undertaken, and as a result a number of inexpensive consumer components, as well as inexpensive 3D printed components, were used to create the prototype.

8.1.1 Meeting the Aim and Objectives

At a simple level, the overall aim of creating a cost-effective telecine prototype has been reached. Overall quality of the output was generally acceptable and consistent with other solutions delivering a standard definition output. The only unknown is the exact quality of the competing products, which could not be assessed directly, mainly due to the limited total budget for the project.

Finally, although a significant amount of research and product design and construction was undertaken, some of the final tests of the prototype could not be completed due to coronavirus restrictions. The prototype as it stands would be well worth re-visiting once open access to laboratory equipment was available, to allow for optimisation of the prototype.

8.1.2 Evaluation of Outcome

Overall, for an initial prototype built on such a tight budget, the project has had an excellent outcome. It identified the cost-saving benefits of ‘build-it-yourself’ products when it comes to technology, made possible thanks to advances in the likes of single board computers, as well as the efficiency and reduced cost of standardised components, to create a product as complex as a telecine.

The main advantage of aiming for a PAL output, based on calculations from the researched technical limits of the film, was that (unlike more modern High Definition, 4K or 8K outputs) less powerful components could be utilised by this low-cost solution.

Above all, the project took technical concepts, standards, and prior industry leading peer-reviewed research for what is considered legacy hardware and developed a brand-new cost-effective solution for what has been traditionally very expensive hardware.

8.1.3 Improvements

The first area for improvement would be in regard to the understanding and implementation of the Python code in the functionality of the prototype. Although it was sufficient to complete the prototype as a proof of concept, if the product were ever to move into commercial application, an overall re-design of this code by an expert in the field would be advantageous. This could allow for more advanced timings in the code, code more adaptive to the changing dimensions of the full film reel versus the take-up reel, and better-timed movement of the film, all of which would allow for more precise film movement while making the transfer faster in the process.

Second would be to fully test the capabilities of the Raspberry Pi GPIO pins and camera. For the GPIO pins, a tighter understanding of the timings of the HIGH and LOW pulse modulation, and how they perform under full system load (something which was only possible once construction was completed), would potentially speed up the entire transfer. This would be undertaken by wiring each pin individually to an oscilloscope and reading, in turn, the square wave it produces to verify the HIGH/LOW timing. This is crucial because although each individual pin is capable of creating a 70kHz square wave within 10µs, this is only without other load on the system; as such, additional precautions were taken with the timings on the prototype, with it being so heavily loaded at all times. Understanding the capabilities of the Pi in this instance would greatly improve speed and reliability.
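A rough back-of-the-envelope check (using the figures quoted above and the prototype's 0.5s delay) shows just how conservative the chosen timings are relative to the pin's stated unloaded capability:

```python
# Back-of-the-envelope timing check using the figures quoted in the text.
delay = 0.5                     # seconds per HIGH/LOW phase, as coded
pulse_period = 2 * delay        # one full HIGH + LOW step cycle
pulse_rate = 1 / pulse_period   # pulses per second actually used

unloaded_max_hz = 70_000        # ~70 kHz square wave capability, unloaded
headroom = unloaded_max_hz / pulse_rate
print(pulse_rate, headroom)     # 1.0 70000.0
```

Even allowing for heavy system load, this suggests substantial room to shorten the delays once the real under-load behaviour has been measured.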

With the camera, although there are fundamental limits on how much restoration can be performed in post-production, having a good quality starting point after full calibration of the camera operating within the prototype system would allow for greater adjustment in post. Unfortunately, these further tests and analysis were halted, with the author not having access to the same hardware at home, yet it is entirely possible that there is more performance to be gained from the prototype. Additionally, with the standard lens not being fit for purpose, an additional lens element dedicated to macro photography would avoid having to ‘jury-rig’ the existing lens in its housing to get the end result. Doing so would allow for more precise macro photography without the standard camera firmware being inaccurate in its chromatic aberration corrections, providing an improved output even with the standard hardware.

With the camera and the movement of the film came the issue of maintaining consistent film tension from the moment the film is taken off the reel all the way through to it being taken up onto the take-up reel. This was outside the capabilities of this design, a more basic functionality being preferred for the concept build. However, it became clear that there was a high probability of loss of film tension after the roller and sprocket motor placed after the film gate. During longer runs, this occasionally resulted in having to manually spool up the film to regain tension. The solution in traditional telecine systems is to have every main point of contact with the film be spring loaded and elasticated to the main chassis. This allows a set system tension to be ‘dialled in’ for the film, ensuring consistent film tension throughout the process chain.

Another improvement would be to adopt the Raspberry Pi 4 into the prototype design. Since the concept and subsequent development of this prototype, the Raspberry Pi 4 has had time to mature as a platform. With the amount of custom Python code and scripting being handled by the CPU, having a newer, more powerful CPU and additional RAM to offload data to during transfer would hugely benefit the overall performance of the prototype for very little cost increase, not to mention the improved bandwidth of the SD card interface, allowing for better read/write performance.

One additional improvement would be to the chassis construction. Although the MakerBeam kit was excellent for iterating multiple revisions of the design quickly, especially as it was provided at no cost, there were limitations on how the frame could be built, and once committed to a set of hardware like that, you are stuck with using their proprietary components. An improvement here would be custom housing for the design to “hide away” a significant amount of the electrical components, as well as making the system easier to move. Again, although great for initial concept and prototype purposes, the system would benefit from being worked on by a product engineer if it were ever to go into commercial construction.

Finally, improvements could be made to the power delivery, along with the possible inclusion of higher torque motors. Housing the transformers safely in a junction box would allow for significantly safer operation by the user. Higher torque motors give faster and more precise individual step accuracy under load, which again allows for an increase in film transfer speed while also being more precise than the current iteration.

8.2 Recommendations

Following the research into the techniques and technology behind film transfer by a telecine system, the author would make the following recommendations.

8.2.1 Accuracy of Film Delivery to Gate

Rather than relying on only the sprockets and rollers to move the film into position within the gate, an area of development would be to use the perforations themselves to accurately line up the Super 8mm film to the camera.

This could be accomplished by using a laser pointing system similar to that found on the larger 35mm and 70mm telecine systems to detect the start of the perforation hole by measuring the intensity of a laser at multiple points before the film gate. The laser beaming through the film stock would give a different intensity reading to the laser passing straight through the hole itself, and the timing of that change could be used to control the pulses sent to the motors to move the film. Having multiple readings before and after the points where the film interacts with sprockets and/or rollers would allow the system to be synced to the film far more precisely.
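A minimal sketch of this detection logic follows; the sensor readings, threshold, and helper function are hypothetical and purely illustrative of the edge-detection idea:

```python
# Hypothetical sensor readings: film stock attenuates the beam (~0.2),
# while an open perforation hole lets it straight through (~0.9).
def find_perforation_edges(readings, threshold):
    """Return the indices where the intensity crosses the threshold,
    i.e. where the beam enters or leaves a perforation hole."""
    edges = []
    for i in range(1, len(readings)):
        prev_open = readings[i - 1] >= threshold
        now_open = readings[i] >= threshold
        if prev_open != now_open:
            edges.append(i)
    return edges

samples = [0.2, 0.2, 0.9, 0.9, 0.9, 0.2, 0.2]
print(find_perforation_edges(samples, 0.5))   # [2, 5]
```

The timing between successive edges would then drive the motor pulse scheduling described above.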

8.2.2 Film Delivery Components

One of the differences with the prototype compared to traditional telecine systems, noted in the improvements, was the lack of spring-loaded sprockets, rollers, and film tables. Adapting the existing design to allow for this would result in much smoother operation of the telecine, as the motors would not have to work so hard to move the film: they would no longer fight against tension in the film, or against tension built up by other motors in the chain being locked down when not in operation.

This feature would also help deal with the disconnect between the angles of rotation required for the two spools, which vary depending on how much film is present on the main film reel and the take-up spool.

Additionally, as the cost of manufacture is low, changing out the plastic 3D printed parts to have them constructed out of metal would again allow for a smoother operation and as such decrease the chance of further damaging the film.

Both of these changes to the components and how they are implemented would have a positive impact on the speed and accuracy of the telecine, while limiting the potential for film damage.

8.2.3 Camera

Although the Raspberry Pi Camera v2 was excellent as a proof of concept for the project, the sensor that the module uses is small and basic in functionality when compared to modern camera hardware.

However, simply changing out the hardware is not a simple task in this instance. The Principal Software Engineer at the Raspberry Pi Foundation (Hughes, 2017) was asked about this and explained the huge complexity involved in getting new cameras to work.

The recommendation here is to run as much calibration on the Raspberry Pi Camera Module as possible, to gain an exact understanding of its standard output once it is functioning alongside the other components within the prototype.

Alternatively, a sufficiently skilled designer could create their own codec for a higher-quality camera.

8.2.5 Power Delivery

Because the different components were tested in sequence, the solution for delivering power to the prototype is overcomplicated: there are four individual transformers, one for each motor, plus a separate supply for the Raspberry Pi itself.

As the outputs required are either 12V 1.5A (the motors) or 5V 2.5A (the Raspberry Pi), a better solution would be a single larger transformer with one mains input and multiple outputs. This would also improve safety, as the transformer could be sealed to prevent electric shock.
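As a rough check on the sizing of such a supply, the worst-case load can be totalled from the ratings quoted above. This is an illustrative calculation only, not a measured load:

```python
# Worst-case power budget for a single multi-output supply,
# using the ratings quoted in the text.
motor_w = 12 * 1.5        # 18 W per stepper motor (12 V, 1.5 A)
pi_w = 5 * 2.5            # 12.5 W for the Raspberry Pi (5 V, 2.5 A)
total_w = 4 * motor_w + pi_w
print(total_w)            # 84.5
```

On these figures, a single sealed supply rated at around 100 W, with separate 12V and 5V rails, would cover the whole prototype with headroom.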

 

8.2.6 Further Study

One area of study which could not be implemented in the prototype, due to time and complexity, was the use of the TensorFlow and TensorRT APIs (AI edge detection) so that the Raspberry Pi could, through the camera, learn the correct shape of the Super 8mm frame via machine learning.

One potential solution to both the camera issue and CPU performance for post-production would be to network the Raspberry Pi with an additional computer. This would potentially allow not only a significantly more powerful camera to be controlled over the network by the Pi, but would also offload the post-production functions to another CPU to run them in real time alongside the telecine code on the Pi. Although the author is aware of networked Pi camera capabilities, these were not explored due to the added complexity of the build versus the single Raspberry Pi implementation.

One thing the author has barely scratched the surface of is the level of complexity possible with Python code. This would need to be a major area of further study before improving the project further, covering everything from precise timing methods to improved motor code design and operation, and custom-built camera controls.

Appendices

1. Supporting Images from Swinson, P.R. (1995)

Notes: the images below were used as the basis for the calculations for optimum Super 8mm film scanning for the prototype. Although useful for additional context, they serve only to allow easy consumption of Swinson's calculations, which are presented in the main text.

For Super 8 film, the following standard measurements can be stated:

 

5.79mm x 4.01mm (visible frame)

7.90mm x 4.01mm (full scanned frame)

1 pixel = 0.264583 mm

For these measurements (which imply a scanning resolution of 70 lines per mm, since 405.3 / 5.79 = 70), the line calculation is as follows:

Visible = 405.3 x 280.7 = 113,767.71 lines

Full = 553 x 280.7 = 155,227.1 lines

Dividing by the pixel size (0.264583 mm) converts these lines into pixel counts:

Visible = 113,767.71 / 0.264583 = 429,988 pixels (0.429 MP)

Full = 155,227.1 / 0.264583 = 586,685 pixels (0.586 MP)

With the full-frame pixel count and the approximate aspect ratio of the full scanned frame (7.90 / 4.01 ≈ 1.97), this gives an ideal image dimension of 1075 x 546.
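The figures above can be reproduced with a short calculation. This is a verification sketch, not project code; the 70 lines-per-mm figure is inferred from the stated line counts (405.3 / 5.79 = 70):

```python
# Reproducing the Super 8mm frame calculations above.
MM_PER_PIXEL = 0.264583        # 1 pixel = 0.264583 mm
LINES_PER_MM = 70              # inferred: 405.3 / 5.79

def pixel_count(width_mm, height_mm):
    """Total lines over the frame area, converted to pixels."""
    lines = (width_mm * LINES_PER_MM) * (height_mm * LINES_PER_MM)
    return lines / MM_PER_PIXEL

visible = pixel_count(5.79, 4.01)
full = pixel_count(7.90, 4.01)
print(int(visible), int(full))       # 429988 586685

# Ideal dimensions from the full-frame pixel count and ~1.97 aspect ratio
aspect = 7.90 / 4.01
height = (full / aspect) ** 0.5
width = height * aspect
print(round(width), round(height))   # 1075 546
```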

2. ASDA Photo Cine Film to DVD and USB Pricing Chart

Notes: as mentioned in section 4.1.1, this is the ASDA pricing chart at the time of writing. There is no price change for larger orders, which is typical of commercial services, especially from a supermarket. Although no timescale is provided on the page, delivery is estimated at 3 weeks from receiving the packaging ASDA sends out for the customer's cine film reels to be processed.

3. Jessops Photo Cine Film Conversion product page

Notes: currently this page is only available on web archive https://web.archive.org/web/20190328083745/https://www.jessops.com/c/offers/photo/restore-revive

Although there is no way to confirm this completely, it appears the service was halted when Jessops went into administration.

4. Kodak Express London 8mm Cine Film Cost Chart

Notes: as it is the only way to reasonably compare with other services, the cost of a single 200ft reel with transfer to DVD (£30 combined) was used to calculate the £0.15 per foot value.

5. C2DT Cost Chart

Notes: whereas high street stores on the whole do not provide a bulk service, the minimum service charge here implies that the consumer will need to use this as a bulk conversion service. For the sake of comparison, the £28 value for 200ft reels was used.

6. Alive Studios Quote Form

Notes: it is hard to determine whether certain order sizes attract different discounts on services; however, as the comparison with other services was based on a 200ft reel, that value was used on the quote service to create a price.

7. United Nations Security Council Resolutions

Notes: provided only to allow for additional context for quote from Kim Jong Un, not directly required for project.

United Nations Security Council Resolution 1695 (2006) Letter dated 4 July 2006 from the Permanent Representative of Japan to the United Nations addressed to the President of the Security Council (S/2006/481) [online]

Available at: http://unscr.com/en/resolutions/doc/1695

[Accessed: 21st February 2020]

United Nations Security Council Resolution 1718 (2006) Non-proliferation/Democratic People’s Republic of Korea [online]

Available at: http://unscr.com/en/resolutions/doc/1718

[Accessed: 21st February 2020]

United Nations Security Council Resolution 1874 (2009) Non-proliferation/Democratic People’s Republic of Korea [online]

Available at: http://unscr.com/en/resolutions/doc/1874

[Accessed: 21st February 2020]

8. Additional Film Scanning Quality Issues

Notes: although the document by Swinson, P.R. (1995) was used to form the basis for all major calculations, the following papers provide additional context to support the resolution decision based on the project aim.

They also provide insight into common issues that aged film causes in telecine systems which, although possibly more pronounced with the larger film formats, are worth being aware of during the transfer process with the prototype.

Storey, R. (1985) Electronic detection and concealment of film dirt. BBC Research Department Report. BBC RD 1985/4

Available at: http://downloads.bbc.co.uk/rd/pubs/reports/1985-04.pdf

[Accessed: 20th October 2019]

Wood, C. Taylor, EW. and Griffiths, F (1966) Colour errors in the telecine reproduction of technicolor film. BBC Research Department Report. BBC RD 1966/63

Available at: http://downloads.bbc.co.uk/rd/pubs/reports/1966-63.pdf

[Accessed: 20th October 2019]

Keene, G.T. and Clifford, J.D. (1962) Commercial Systems for Making 8mm Prints. Journal of the SMPTE, 71(6), pp. 447-449.

Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7257628&isnumber=7257619

[Accessed: 20th October 2019]

9. Table 1 – Single board computer specification comparison data sheets

Notes: Table 1 was compiled for quick review of the key specifications for the single board computer solutions. The full data sheets are provided here for additional data.

Sources for specifications as listed in Table 1:

Raspberry Pi Foundation (n.d.) Raspberry Pi Zero W [online]

Available at: https://www.raspberrypi.org/products/raspberry-pi-zero-w/

[Accessed: 6th October 2019]

Pine64 (n.d.) Pine64 SoC and Memory Specification [online]

Available at: https://wiki.pine64.org/index.php/Pine64#SoC_and_Memory_Specification

[Accessed: 6th October 2019]

Odroid (n.d.) Odroid C1+ [online]

Available at: https://www.odroid.co.uk/odroid-c1-plus-motherboard

[Accessed: 6th October 2019]

Raspberry Pi Foundation (n.d.) Raspberry Pi 3 Model B+ [online]

Available at: https://www.raspberrypi.org/products/raspberry-pi-3-model-b-plus/

[Accessed: 6th October 2019]

Pine64 (n.d.) Rock64 [online]

Available at: https://www.pine64.org/devices/single-board-computers/rock64/

[Accessed: 6th October 2019]

ASUS (n.d.) Tinker Board [online]

Available at: https://www.asus.com/uk/Single-Board-Computer/Tinker-Board/specifications/

[Accessed: 6th October 2019]

Pine64 (n.d.) PINE H64 Ver. B [online]

Available at: https://www.pine64.org/pine-h64-ver-b/

[Accessed: 6th October 2019]

Odroid (n.d.) Odroid C2 [online]

Available at: https://www.odroid.co.uk/hardkernel-odroid-c2-board

[Accessed: 6th October 2019]

CubieTech (n.d.) Cubieboard 6 [online]

Available at: http://docs.cubieboard.org/products/start

[Accessed: 6th October 2019]

10. Microsoft Insider Preview Builds

Notes: these builds can only be seen if you have access to the Microsoft Insider Program. Although registration is usually an additional step, LJMU students can access this service using their LJMU account login, so this detail was added to the main document for completeness (despite this not being used for the chosen OS).

11. MASTER LED ExpertColor 5.5-50W GU10 930 36D Data

Notes: although there are objectively better bulbs available, these bulbs were the only viable cost-effective option before stepping up to full, high-end professional industrial bulbs.

What follows is the main photometric data for the bulb, which is needed only for extra context on bulb performance but is difficult to view on the main datasheet, followed by the full datasheet for reference purposes:

 

 


 

12. NEMA 17 Stepper Motor Specifications

Notes: these motors are extremely common, and datasheets are often mixed up between different models; however, in this instance the motors were decided upon early in the development of the prototype, so they could be sourced from the production factory in China.

As such, the datasheet was able to be provided from the source:

13. L298N Dual H Bridge Stepper Motor Driver Board Specification Sheet

Notes: although these boards were suitable for the build, they were eventually replaced by a different driver; the specification sheet is provided here mainly for reference.

Full sheet is available here: http://www.handsontec.com/dataspecs/L298N%20Motor%20Driver.pdf

[Accessed: 12th October 2019]

14. Sprocket and Rollers Design Images

Notes: As this would be the main mode of film travel, the sprocket was considered to be the key design for the telecine prototype to allow for correct and precise movement.

As the rollers were not critical (their edge just had to fit the gap in the sprocket for the Super 8mm film), they were downloaded as a pre-made 8mm sized roller, with the large open inner section being adapted by a LJMU manufacturing technician by cutting out aluminium plate to allow the roller to be mounted to a 3mm diameter screw.

15. Super 8mm Take Up Reel Design

Notes: although film reels can still be purchased through eBay, the inexpensive listings are units that people have 3D printed themselves to sell (at which point you might as well print them yourself if you have the option), or are bulk lots of multiple old units in questionable condition.

 

16. Super 8mm Film table and Adapter

Notes: these were custom parts created specifically for use with motors with 5mm D-shaped shafts. It is worth pointing out that, although these were designed to specification, slight discrepancies in manufacture, as well as the general age of the Super 8mm film reel purchased for sample purposes, resulted in one of the adapters requiring a small amount of electrical tape to provide a slightly larger diameter.

As there was evidence that the film itself was badly worn with age, it is possible that this resulted in a slight change in dimensions to the aged plastic also. However, once the tape was applied the system worked as intended.

As mentioned in main write-up, the adapter simply slides onto the main plate to allow for both Super 8mm and standard 8mm compatibility.

17. Stepper Motor Mount Design

Notes: this was a simple design, yet due to time and cost constraints the model was sourced and then printed along with the other items. The design allows the motor to be mounted using the inner drill holes, with the outer two on each end used to mount the entire assembly to the main prototype chassis.

18. Script for Raspberry Pi CPU cooling Tests

Notes: the following is the script used in the standard terminal of Raspbian OS in order to stress the CPU and measure the results:

#!/bin/bash
clear
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
vcgencmd measure_temp
sysbench --test=cpu --cpu-max-prime=1000 --num-threads=4 run >/dev/null 2>&1
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
vcgencmd measure_temp
sysbench --test=cpu --cpu-max-prime=25000 --num-threads=4 run >/dev/null 2>&1
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
vcgencmd measure_temp
sysbench --test=cpu --cpu-max-prime=25000 --num-threads=4 run >/dev/null 2>&1
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
vcgencmd measure_temp
sysbench --test=cpu --cpu-max-prime=25000 --num-threads=4 run >/dev/null 2>&1
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
vcgencmd measure_temp
sysbench --test=cpu --cpu-max-prime=50000 --num-threads=4 run
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
vcgencmd measure_temp

It is important to note here that this is designed in stages to slowly ramp up the stress on the Pi.

The first 1000 prime numbers test is very quick and is only designed to ensure there are no issues with the boost clock on the Pi. It is then followed by multiple larger runs. Three at 25000 prime numbers and one sustained 50000 prime numbers.

After each test, the Pi reports its clock speed and temperature instantly to allow for analysis of temperature and clock speeds over time.

Although not as complex as other modern benchmarks for other devices, this serves well in stressing the CPU to 100% utilisation and as such gives accurate temperature performance over time.
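For completeness, the raw values printed after each stage can be parsed for logging. The following is a minimal sketch, not part of the test script itself; the sample strings mirror the formats these commands print, i.e. scaling_cur_freq in kHz and vcgencmd in the form temp=48.3'C:

```python
# Parse the two readings the script prints after each sysbench stage.

def parse_freq_mhz(line):
    """scaling_cur_freq reports kHz, e.g. '1400000' -> 1400.0 MHz."""
    return int(line.strip()) / 1000

def parse_temp_c(line):
    """vcgencmd measure_temp prints e.g. "temp=48.3'C" -> 48.3."""
    return float(line.strip().split('=')[1].rstrip("'C"))

print(parse_freq_mhz('1400000\n'))    # 1400.0
print(parse_temp_c("temp=48.3'C\n"))  # 48.3
```

Collecting these pairs over the run gives the clock-versus-temperature trace used for the cooling analysis.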

19. Script for SD Card Benchmark

Notes: the command line used in the main text retrieves this script from its hosted site. It is provided here for reference and as a backup should the online host be unavailable during review.

#!/bin/bash

DATAMB=${1:-512}
FILENM=~/test.dat

[ -f /flash/config.txt ] && CONFIG=/flash/config.txt || CONFIG=/boot/config.txt

trap "rm -f ${FILENM}" EXIT

[ "$(whoami)" == "root" ] || { echo "Must be run as root!"; exit 1; }

HDCMD="hdparm -t --direct /dev/mmcblk0 | grep Timing"
WRCMD="rm -f ${FILENM} && sync && dd if=/dev/zero of=${FILENM} bs=1M count=${DATAMB} conv=fsync 2>&1 | grep -v records"
RDCMD="echo 3 > /proc/sys/vm/drop_caches && sync && dd if=${FILENM} of=/dev/null bs=1M 2>&1 | grep -v records"

grep OpenELEC /etc/os-release >/dev/null && DDTIME=5 || DDTIME=6

getperfmbs()
{
  local cmd="${1}" fcount="${2}" ftime="${3}" bormb="${4}"
  local result count _time perf
  result="$(eval "${cmd}")"
  count="$(echo "${result}" | awk "{print \$${fcount}}")"
  _time="$(echo "${result}" | awk "{print \$${ftime}}")"
  if [ "${bormb}" == "MB" ]; then
    perf="$(echo "${count}" "${_time}" | awk '{printf("%0.2f", $1/$2)}')"
  else
    perf="$(echo "${count}" "${_time}" | awk '{printf("%0.2f", $1/$2/1024/1024)}')"
  fi
  echo "${perf}"
  echo "${result}" >&2
}

getavgmbs()
{
  echo "${1} ${2} ${3}" | awk '{r=($1 + $2 + $3)/3.0; printf("%0.2f MB/s",r)}'
}

systemctl stop kodi 2>/dev/null
clear
sync

[ -f /sys/kernel/debug/mmc0/ios ] || mount -t debugfs none /sys/kernel/debug

overlay="$(grep -E "^dtoverlay" ${CONFIG} | grep -E "mmc|sdhost")"
clock="$(grep "actual clock" /sys/kernel/debug/mmc0/ios 2>/dev/null | awk '{printf("%0.3f MHz", $3/1000000)}')"
core_now="$(vcgencmd measure_clock core | awk -F= '{print $2/1000000}')"
core_max="$(vcgencmd get_config int | grep core_freq | awk -F= '{print $2}')"
turbo="$(vcgencmd get_config int | grep force_turbo | awk -F= '{print $2}')"
[ -n "${turbo}" ] || turbo=0
[ ${turbo} -eq 0 ] && turbo="$(cat /sys/devices/system/cpu/cpufreq/ondemand/io_is_busy)"
[ -n "${core_max}" ] || core_max="${core_now}"

echo "CONFIG: ${overlay}"
echo "CLOCK : ${clock}"
echo "CORE  : ${core_max} MHz, turbo=${turbo}"
echo "DATA  : ${DATAMB} MB, ${FILENM}"
echo
echo "HDPARM:"
echo "======"
HD1="$(getperfmbs "${HDCMD}" 5 8 MB)"
HD2="$(getperfmbs "${HDCMD}" 5 8 MB)"
HD3="$(getperfmbs "${HDCMD}" 5 8 MB)"
HAD="$(getavgmbs "${HD1}" "${HD2}" "${HD3}")"
echo
echo "WRITE:"
echo "====="
WR1="$(getperfmbs "${WRCMD}" 1 ${DDTIME} B)"
WR2="$(getperfmbs "${WRCMD}" 1 ${DDTIME} B)"
WR3="$(getperfmbs "${WRCMD}" 1 ${DDTIME} B)"
WRA="$(getavgmbs "${WR1}" "${WR2}" "${WR3}")"
echo
echo "READ:"
echo "===="
RD1="$(getperfmbs "${RDCMD}" 1 ${DDTIME} B)"
RD2="$(getperfmbs "${RDCMD}" 1 ${DDTIME} B)"
RD3="$(getperfmbs "${RDCMD}" 1 ${DDTIME} B)"
RDA="$(getavgmbs "${RD1}" "${RD2}" "${RD3}")"
echo
echo "RESULT (AVG):"
echo "============"
printf "%-33s core_freq turbo overclock_50 WRITE READ HDPARM\n" "Overlay config"
printf "%-33s %d %d %11s %10s %10s %10s\n" "${overlay}" "${core_max}" "${turbo}" "${clock}" "${WRA}" "${RDA}" "${HAD}"

20. Camera Mount and Film Gate/Plate

Notes: although the camera mount was designed for the v1 Raspberry Pi camera, the v2 camera uses the same base board and dimensions, with only a change to the sensor, so it remains compatible with this design.

Additionally, to reduce costs and complexity, due to the film gate and plate being designed off an existing system, those models were utilised for the project.

Camera Mount Model

Film Gate and Securing Plate

21. Raspberry Pi Camera GUI Software

Notes: as mentioned in the main document, this was not used for the main prototype. It is included here to give context to the level of complexity designers can achieve within Python code.

22. Google Drive Authentication Information

Notes: As mentioned in the main document, there are a number of additional pieces of data needed to use this service, which revolve around an understanding of the Google Developers API and Services. Although not needed for understanding of the project, breakdowns of these tools are shown here for reference.

Section 1 – Allowing Access to Google Drive

The developer console can be found at: https://console.developers.google.com/ where you need to login with a valid Google account. If you are a new user to these services, you will need to go through a verification process.

From here, a new project will need to be made. This will act as a name for any access to Google API and Services needed for the cloud sync.

Once the project is created, access the API Library from the main menu and search for Google Drive API.

Click onto that API and click to enable it. This allows your project to specifically use this API.

Within the project, an application is needed to allow for a Client ID, secret answer, and verification code to be generated.

In this image, although the detail is shown, it has been blanked out, as this data can be used to directly access a Google Drive without the need for the user's Google ID (security-critical user information).

Once this is created, back on the Raspberry Pi, the configuration of the software can begin by typing the following command into the console:

rclone config

Which begins the process of customising the setup using the client ID and secret answer.

Once you do this, a number of options appear where the user needs to select which cloud service is being installed (Google Drive). It is important to select this correctly, as otherwise it will not correspond with the Google API that was enabled.

At this point, the configuration will generate a link to use on the Raspberry Pi, which allows the user to login to Google with their ID to provide the software verified access using the client ID and secret answer.

Section 2 – Automatic Mount of Google Drive on Boot

Although mounting the Google Drive manually is useful, having the drive mount automatically is preferred, as it ensures the functionality is there for the telecine regardless of whether the user remembers to mount the drive or not.

This is accomplished through the Linux systemd user service (systemd/User); more detail can be found here: https://wiki.archlinux.org/index.php/Systemd/User

This is a configuration file which is checked during Raspbian OS boot and its path is at:

Pi/Home/etc/systemd/user

With the terminal sudo nano command, an rclone@.service file is created with this information:

[Unit]
Description=rclone: Remote FUSE filesystem for cloud storage config %i
Documentation=man:rclone(1)

[Service]
Type=notify
ExecStartPre=/bin/mkdir -p %h/mnt/%i
ExecStart= \
  /usr/bin/rclone mount \
    --fast-list \
    --vfs-cache-mode writes \
    --vfs-cache-max-size 100M \
    %i: %h/mnt/%i

[Install]
WantedBy=default.target

Once this has been saved, systemd needs to be updated to see this change and enable the file with the following commands:

systemctl --user enable rclone@gdrive

systemctl --user start rclone@gdrive

As a result, whenever a user logs into the Raspberry Pi, the mounting process for the Google Drive runs and access is granted automatically.

23. Telecine Prototype Main Chassis Images

Notes: Although minor adjustments were continued throughout the build process, the overall design remained the same at this point in the process. The following images were taken after initial tests were completed of the prototype.

24. Installation of the DRV8825 Stepper Motor Drivers

Notes: due to the amount of cabling involved in running four stepper motors, each with a driver, a decision was made to utilise a breadboard in the wiring process. This allowed the driver modules to ‘share’ the 3.3V power hold to the motors, as the main control would be handled by other GPIO pins.

This results in a single driver cabling placement as follows:

Then with all the driver modules cabled, before being mounted to the chassis:

The 12V transformer is cabled using a standard 3-pin UK plug (3-wire core), with a ground and a live wire coming out for each of the driver inputs as follows.

25. Additional Motor Control Code

Notes: Due to the variation in code design for the different motors, their code is listed here for the sake of completion of data.

rollers_in.py

This is the motor code running the sprocket before the camera and film gate. The only differences of note are the GPIO pin numbers and the use of 1/8 micro-steps: as this sprocket only needs to move the film along over a film roller, the finer steps give it smoother motion.

from time import sleep
import RPi.GPIO as GPIO

DIR = 19    # Direction GPIO Pin
STEP = 26   # Step GPIO Pin
CW = 1      # Clockwise Rotation
CCW = 0     # Counterclockwise Rotation
SPR = 200   # Steps per Revolution (360 / 1.8)

GPIO.setmode(GPIO.BCM)
GPIO.setup(DIR, GPIO.OUT)
GPIO.setup(STEP, GPIO.OUT)
GPIO.output(DIR, CCW)

MODE = (5, 6, 13)   # Microstep Resolution GPIO Pins
GPIO.setup(MODE, GPIO.OUT)
RESOLUTION = {'Full': (0, 0, 0),
              'Half': (1, 0, 0),
              '1/4': (0, 1, 0),
              '1/8': (1, 1, 0),
              '1/16': (0, 0, 1),
              '1/32': (1, 0, 1)}
GPIO.output(MODE, RESOLUTION['1/8'])

step_count = 10 * 8     # steps scaled by the 1/8 microstep factor
delay = .500 / 8        # delay scaled to keep the same shaft speed

for x in range(step_count):
    GPIO.output(STEP, GPIO.HIGH)
    sleep(delay)
    GPIO.output(STEP, GPIO.LOW)
    sleep(delay)

GPIO.cleanup()

rollers_out.py

One additional point of note here is to show what can happen when you get the wire pairs in the wrong order on the driver module.

Both roller motors are meant to turn clockwise. However, if the paired wires from the motor are connected in the wrong order, the rotation has to be flipped around in the code.

A nice trick with stepper motors is to create a circuit from 2 of the 4 wires and a small LED, and then turn the motor shaft by hand. If the two wires form a coil pair within the motor, the LED will light up as you turn it. However, with no way at home to detect which of the paired leads is positive and which is negative, this issue can arise. Being aware of this, and testing the motors individually, allows the problem to be resolved with an adjustment in the code.

from time import sleep
import RPi.GPIO as GPIO

DIR = 23    # Direction GPIO Pin
STEP = 24   # Step GPIO Pin
CW = 1      # Clockwise Rotation
CCW = 0     # Counterclockwise Rotation
SPR = 200   # Steps per Revolution (360 / 1.8)

GPIO.setmode(GPIO.BCM)
GPIO.setup(DIR, GPIO.OUT)
GPIO.setup(STEP, GPIO.OUT)
GPIO.output(DIR, CCW)

MODE = (10, 9, 11)  # Microstep Resolution GPIO Pins
GPIO.setup(MODE, GPIO.OUT)
RESOLUTION = {'Full': (0, 0, 0),
              'Half': (1, 0, 0),
              '1/4': (0, 1, 0),
              '1/8': (1, 1, 0),
              '1/16': (0, 0, 1),
              '1/32': (1, 0, 1)}
GPIO.output(MODE, RESOLUTION['1/8'])

step_count = 10 * 8
delay = .500 / 8

for x in range(step_count):
    GPIO.output(STEP, GPIO.HIGH)
    sleep(delay)
    GPIO.output(STEP, GPIO.LOW)
    sleep(delay)

GPIO.cleanup()

spool_up.py

from time import sleep
import RPi.GPIO as GPIO

DIR = 2     # Direction GPIO Pin
STEP = 3    # Step GPIO Pin
CW = 1      # Clockwise Rotation
CCW = 0     # Counterclockwise Rotation
SPR = 200   # Steps per Revolution (360 / 1.8)

GPIO.setmode(GPIO.BCM)
GPIO.setup(DIR, GPIO.OUT)
GPIO.setup(STEP, GPIO.OUT)
GPIO.output(DIR, CW)

MODE = (17, 27, 22)  # Microstep Resolution GPIO Pins
GPIO.setup(MODE, GPIO.OUT)
RESOLUTION = {'Full': (0, 0, 0),
              'Half': (1, 0, 0),
              '1/4': (0, 1, 0),
              '1/8': (1, 1, 0),
              '1/16': (0, 0, 1),
              '1/32': (1, 0, 1)}
GPIO.output(MODE, RESOLUTION['Full'])

step_count = 10
delay = .500

for x in range(step_count):
    GPIO.output(STEP, GPIO.HIGH)
    sleep(delay)
    GPIO.output(STEP, GPIO.LOW)
    sleep(delay)

GPIO.cleanup()

26. Contents of MakerBeam Regular Black Starter Kit

Notes: as mentioned in the acknowledgements, this kit was provided at no cost. The normal retail cost is £75 (€85.50) as listed by MakerBeam (n.d.)

The kit comprises the following parts:

50 aluminum MakerBeam profiles black anodised:

4x300mm

8x200mm

6x150mm

16x100mm

8x60mm

8x40mm

60 stainless steel brackets:

12 x 90 degree

12 x 60 degree

12 x 45 degree

12 x corner brackets

12 x 90 degree right angle

1 bag of 6mm square headed MakerBeam bolts with hex hole (M3), 1 bag of nuts (M3) and 1 hex nut driver.

27. Original Project Timetable

Notes: although the timetable for the project was severely impacted by the outbreak of Coronavirus, it is kept here for reference to the original timescale, especially as the majority of the work was completed to the original deadline dates before the 3-week deadline changes.

References

Alexamder (2015) rpitelecine [online]

Available at: https://github.com/Alexamder/rpitelecine

[Accessed: 14th October 2019]

Alibaba (n.d.) smps 110v 220v DC AC 12v 5a switch power supply 60w power transformers [online]

Available at: https://www.alibaba.com/product-detail/smps-110v-220v-DC-AC-12v_62332440061.html?spm=a2700.7724857.normalList.63.5c5552eavNMxiL

[Accessed: 25th October 2019]

Alive Studios (n.d.) Cine Film Calculator [online]

Available at: https://admin.alivestudios.co.uk/cinefilm-form-p1

[Accessed: 6th October 2019]

Aokin (n.d. a) For Raspberry Pi 3 Model B Heatsink Cooling with 2 Pieces Pure Copper + 1 Piece Aluminum Heat Sink [online]

Available at: https://www.aliexpress.com/item/4000200307046.html

[Accessed: 8th October 2019]

Aokin (n.d. b) Aokin Raspberry Pi 4 Model B Dual Fan with Heatsink Ultimate Cooling Fan Cooler Optional for Raspberry Pi 3/3B+/4B [online]

Available at: https://www.aliexpress.com/item/4000401940976.html

[Accessed: 8th October 2019]

ASDA (n.d.) Cine Film To DVD/USB [online]

Available at: https://www.asda-photo.co.uk/category/477-cine-film-to-dvdusb

[Accessed: 6th October 2019]

Billwilliams1952 (2016) PiCameraApp [online]

Available at: https://github.com/Billwilliams1952/PiCameraApp

[Accessed: 28th October 2019]

Cappel, W. (1969) The Possibilities and Advantages of 8mm Film in the Educational Field. Proceedings of the Symposium on Super 8 Film Production Techniques, Los Angeles, CA, USA, pp. 93-95.

Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7261468&isnumber=7261455

[Accessed: 20th November]

CPC (n.d.) 8MP Raspberry Pi Camera Board [online]

Available at: https://cpc.farnell.com/raspberry-pi/rpi-8mp-camera-board/raspberry-pi-camera-board-8mp/dp/SC14028#

[Accessed: 1st November 2019]

C2DT (n.d.) START YOUR ORDER [online]

Available at: https://www.cine2dvdtransfers.co.uk/ordering-step-1

[Accessed: 6th October 2019]

Craig-Wood (2014) Rclone – rsync for cloud storage [online]

Available at: https://rclone.org/

[Accessed: 2nd December 2019]

Cunningham, C (2020a) original sequence [online video]

Available at: https://www.youtube.com/watch?v=_TcPWf9alyc

[Accessed: 22nd April 2020]

Cunningham, C (2020b) Stability Pass [online video]

Available at: https://www.youtube.com/watch?v=VHGuHQKUhjc

[Accessed: 22nd April 2020]

Cunningham, C (2020c) Telecine Stable Sequence [online video]

Available at: http://chris-cunningham.co.uk/videos/Tele_Stable_Sequence.mov

[Accessed: 22nd April 2020]

DietPi (n.d.) DietPi – features overview [online]

Available at: https://dietpi.com/#features

[Accessed: 30th March 2020]

Disney (2020) The Jungle Book (1967) [online video]

Available at: https://www.disneyplus.com/en-gb/movies/the-jungle-book-1967/5trzAb4Rz3F9

[Accessed: 22nd April 2020 – paywall content]

DSIDA (n.d.) QC-3FF-S-Z Relay 5V low level trigger One 1 Channel Relay Module interface Board Shield For PIC AVR DSP ARM MCU Arduino [online]

Available at: https://www.aliexpress.com/item/32933351367.html

[Accessed: 1st October 2019]

Electronics Hub (2018) Controlling a Stepper Motor with Raspberry Pi and L298N [online video]

Available at: https://www.youtube.com/watch?v=-qupzr_XL_U

[Accessed: 10th October 2019]

etiennecollomb (2018) Super-8-Raspberry-Scan [online]

Available at: https://github.com/etiennecollomb/Super-8-Raspberry-Scan

[Accessed: 14th October 2019]

Geerling (n.d.) Rasbperry Pi 3 model B+ [online]

Available at: https://www.pidramble.com/wiki/benchmarks/microsd-cards#3-model-b-plus

[Accessed: 7th October 2019]

Geerling (2016) How to overclock the microSD card reader on a Raspberry Pi 3 [online]

Available at: https://www.jeffgeerling.com/blog/2016/how-overclock-microsd-card-reader-on-raspberry-pi-3

[Accessed: 7th October 2019]

Google (n.d.) The Chromium Projects: Chromium OS [online]

Available at: https://www.chromium.org/chromium-os

[Accessed: 30th March 2020]

Hughes, J (2017) Re: V3 camera in the works? [online]

Available at: https://www.raspberrypi.org/forums/viewtopic.php?p=1108491&sid=967855361394f4cf6b42d50264b0ef93#p1108491

[Accessed: 23rd April 2020]

IMAX (n.d.) About Us [online]

Available at: https://imaxmelbourne.com.au/about_imax

[Accessed: 3rd April 2020]

Jessops (n.d.) Cine Film Conversion [online]

Available at: https://www.jessops.com/c/offers/photo/restore-revive

[Accessed: 14th October 2019]

jphfilm (2018) rpi-film-capture [online]

Available at: https://github.com/jphfilm/rpi-film-capture

[Accessed: 9th October 2019]

Kinograph (n.d.) Open-Source Film Digitization [online]

Available at: https://www.kinograph.cc/

[Accessed: 14th October 2019]

Knight, R.E. (1968) Colour Temperature: With Reference to Colour Film for Television. British Kinematography, Sound & Television, 50(3), pp. 62-77.

Kodak (n.d.) SPECS KODAK SCANZA Digital Film Scanner [online]

Available at: https://www.kodak.com/GB/en/consumer/product/product_specs/?contentid=4295009392&taxid=4294971108

[Accessed: 30th March 2020]

KODAK.COM (2017) KODAK Super 8 Camera [online]

Available at: https://www.kodak.com/GB/en/Consumer/Products/Super8/Super8-camera/default.htm

[Accessed: 7th October 2019]

Kodak Express London (n.d.) Super 8 & 8mm Cine from £6.50! Copying Cine Film to DVD and MOV Digital Files [online]

Available at: http://www.kodakexpresscamden.com/Cine-Film-8mm-Copy.html

[Accessed: 6th October 2019]

Lamps & Tubes (n.d.) 15 kW xenon arc lamp for IMAX projection system IMAX-Kino Projektionslampe (Xenon Kurzbogenlampe) Lampe au xénon à arc court pour projecteur IMAX [online]

Available at: http://lampes-et-tubes.info/alxe/al020.php?l=e

[Accessed: 3rd April 2020]

Last Minute Engineers (n.d.) Interface L298N DC Motor Driver Module with Arduino [online]

Available at: https://lastminuteengineers.com/l298n-dc-stepper-driver-arduino-tutorial/

[Accessed: 10th October 2019]

Linux Org (2017) Download Linux [online]

Available at: https://www.linux.org/pages/download/

[Accessed: 30th March 2020]

Long, S. (2016) Introducing PIXEL [online]

Available at: https://www.raspberrypi.org/downloads/raspbian/

[Accessed: 30th March 2020]

MakerBeam (n.d.) Black Starter Kit Regular MakerBeam [online]

Available at: https://www.makerbeam.com/makerbeam-makerbeam-regular-starter-kit-black.html

[Accessed: 27th January 2020]

Marshal Kim Jong Un (2013) New Year Address [online]

Available at: http://onecoreanetwork.blogspot.com/2013/01/new-year-address-january-1-juche-102.html

[Accessed: 21st February 2020]

MATT (2012) Simple Guide to the Raspberry Pi GPIO Header [online]

Available at: https://www.raspberrypi-spy.co.uk/2012/06/simple-guide-to-the-rpi-gpio-header-and-pins/

[Accessed: 2nd December 2019]

METS (n.d.) Making the Film Gate [online]

Available at: http://www.mets-telecinesystem.co.uk/index.php/how-its-made/making-the-film-gate

[Accessed: 28th October 2019]

Microsoft (2018) An overview of Windows 10 IoT Core [online]

Available at: https://docs.microsoft.com/en-gb/windows/iot-core/windows-iot-core

[Accessed: 30th March 2020]

Microsoft (n.d.) Windows Insider Preview Downloads [online]

Available at: https://www.microsoft.com/en-us/software-download/windowsiot

[Accessed: 30th March 2020]

Nanotec (n.d.) STEPPER MOTOR ANIMATION [online]

Available at: https://en.nanotec.com/typo3conf/ext/nanotec/Resources/Public/Animation/StepperMotor/En/motor_micro.html

[Accessed: 3rd February 2019]

Noctua (n.d.) Noctua NF-A4x20 5V, Premium Quiet Fan, 3-Pin, 5V Version (40x20mm, Brown) [online]

Available at: https://www.amazon.co.uk/Noctua-NF-A4x10-5V-3-Pin-Premium/dp/B071W6JZV8/

[Accessed: 30th March 2020]

Philips (2020) MASTER LED ExpertColor LED ExpertColor 5.5-50W GU10 930 36D [online]

Available at: https://www.assets.signify.com/is/content/PhilipsLighting/fp929001347402-pss-en_gb

[Accessed: 24th March 2020]

Picamera (n.d.) picamera [online]

Available at: https://picamera.readthedocs.io/en/release-1.13/index.html

[Accessed: 28th October 2019]

Pololu (n.d.) DRV8825 Stepper Motor Driver Carrier, High Current [online]

Available at: https://www.pololu.com/product/2133/specs

[Accessed: 20th March 2020]

Raspberry Pi Community Board (2016) L298N Dual H Bridge Stepper Motor Driver [online]

Available at: https://www.raspberrypi.org/forums/viewtopic.php?t=135599

[Accessed: 12th October 2019]

Raspberry Pi Foundation (n.d.) Raspbian [online]

Available at: https://www.raspberrypi.org/downloads/raspbian/

[Accessed: 30th March 2020]

Reflecta (n.d.) Reflecta film scanner super 8/normal 8 [online]

Available at: https://www.amazon.co.uk/Reflecta-film-scanner-super-normal/dp/B01MYE5KPS/

[Accessed: 30th March 2020]

Riemersma, T. (2019) Candela, Lumen, Lux: the equations [online]

Available at: https://www.compuphase.com/electronics/candela_lumen.htm

[Accessed: 3rd April 2020]

SMPTE ST 37 (1994) SMPTE Standard – For Motion-Picture Equipment — Raw Stock Cores [online]

Available at: https://ieeexplore.ieee.org/servlet/opac?punumber=7290313

[Accessed: 7th October 2019]

Stack Overflow (2017) Running multiple Python scripts simultaneously and then sequentially [online]

Available at: https://stackoverflow.com/questions/42072715/running-multiple-python-scripts-simultaneously-and-then-sequentially

[Accessed: 10th February 2020]

Swinson, P.R. (1995) Kloning the film image for digital high resolution systems. IBC 95 International Broadcasting Convention, Amsterdam, Netherlands, 1995 [online] pp. 530-536.

Available at: https://ieeexplore.ieee.org/document/475461

[Accessed: 10th October 2019]

SysBench (2007) System performance benchmark [online]

Available at: http://web.archive.org/web/20070318025604/http://sourceforge.net/projects/sysbench

[Accessed: 1st October 2019]

They Shall Not Grow Old (2018) [film] Directed by Peter Jackson. USA: Warner Bros. Entertainment Inc. (99 mins)

Toggio (2016) MINIBIAN image for Raspberry Pi [online]

Available at: https://sourceforge.net/projects/minibian/

[Accessed: 30th March 2020]

Tronixlabs (n.d.) Control DC and Stepper Motors With L298N Dual Motor Controller Modules and Arduino [online]

Available at: https://www.instructables.com/id/Control-DC-and-stepper-motors-with-L298N-Dual-Moto/

[Accessed: 12th October 2019]

Vertex Video (n.d.) Calibration Charts [online]

Available at: http://accu-chart.com/SD-Chart%20Sets-Standard-Definition-Calibration-Test-Charts.asp

[Accessed: 2nd October 2019]

White, D.R. (1962) 8mm and New Small-Format Film Systems: From the SMPTE Engineering Vice-President. Journal of the SMPTE, vol. 71, no. 8, Aug 1962 [online] pp. 555-555.

Available at: https://ieeexplore.ieee.org/document/7258028

[Accessed: 20th November 2019]

Winait (2019) Winait 5″&3″ Reel 8mm Roll Film & Super8 Roll Film Digital Film Video Scanner [online]

Available at: https://web.archive.org/web/20191006130211/https:/www.amazon.co.uk/Winait-Super8-Digital-Video-Scanner/dp/B07142WSCS/

[Accessed: 30th March 2020]

Wolverine (n.d.) Wolverine 8mm and Super8 Reels Movie Digitizer with 2.4″ LCD, Black (Film2Digital MovieMaker) [online]

Available at: https://www.amazon.co.uk/Wolverine-Super8-Digitizer-Film2Digital-MovieMaker/dp/B01KA32HH0/

[Accessed: 30th March 2020]

Zavada, R.J. (1970) The Standardization of the Super-8 System, Journal of the SMPTE [online] vol. 79, no. 6, pp. 536-541

Available at: https://ieeexplore.ieee.org/document/7227171

[Accessed: 20th October 2019]

4KMEDIA.ORG (2019) Real Or Fake 4K [online]

Available at: https://4kmedia.org/real-or-fake-4k/

[Accessed: 20th November 2019]

Standards and Recommended Practices

IET BS 7671 Requirements for Electrical Installations [online]

Available at: https://electrical.theiet.org/bs-7671/

[Accessed: 3rd February 2020]

ITU-R BT.601 Studio encoding parameters of digital television for standard 4:3 and wide-screen 16:9 aspect ratios [online]

Available at: https://www.itu.int/rec/R-REC-BT.601/

[Accessed: 7th October 2019]

ITU-R BT.1700 Characteristics of composite video signals for conventional analogue television systems [online]

Available at: https://www.itu.int/rec/R-REC-BT.1700

[Accessed: 7th October 2019]

SMPTE RP 12:1997 – Recommended Practice – Screen Luminance for Drive-In Theaters [online]

Available at: https://ieeexplore.ieee.org/servlet/opac?punumber=7290382

[Accessed: 20th October 2019]

SMPTE RP 55:1997 – Recommended Practice – 8-mm Type S Sprocket Design [online]

Available at: https://ieeexplore.ieee.org/servlet/opac?punumber=7291264

[Accessed: 13th October 2019]

SMPTE RP 98:1995 – Recommended Practice – Measurement of Screen Luminance in Theaters [online]

Available at: https://ieeexplore.ieee.org/servlet/opac?punumber=7289743

[Accessed: 20th October 2019]

SMPTE ST 37:1994 For Motion-Picture Film — Raw Stock Cores [online]

Available at: https://ieeexplore.ieee.org/servlet/opac?punumber=7290313

[Accessed: 20th October 2019]

SMPTE ST 75:1994 For Motion-Picture Film — Raw Stock — Designation of A and B Windings [online]

Available at: https://ieeexplore.ieee.org/servlet/opac?punumber=7290106

[Accessed: 20th October 2019]

SMPTE ST 154:2003 For Motion-Picture Film (8-mm Type S) — Projectable Image Area and Projector Usage [online]

Available at: https://ieeexplore.ieee.org/servlet/opac?punumber=7291483

[Accessed: 14th November 2019]

SMPTE ST 184:1998 For Motion-Picture Film — Raw Stock Identification and Labeling [online]

Available at: https://ieeexplore.ieee.org/servlet/opac?punumber=7290931

[Accessed: 20th October 2019]

SMPTE ST 196:2003 For Motion-Picture Film — Indoor Theater and Review Room Projection — Screen Luminance and Viewing Conditions [online]

Available at: https://ieeexplore.ieee.org/servlet/opac?punumber=7290988

[Accessed: 20th October 2019]

SMPTE ST 212:1995 Motion-Picture Equipment (8-mm Type S) — Projection Reels — 75-mm Diameter [online]

Available at: https://ieeexplore.ieee.org/servlet/opac?punumber=7292044

[Accessed: 15th October 2019]
