School of Engineering
Course unit
INP9087858, A.A. 2019/20

Information concerning the students who enrolled in A.Y. 2019/20

Information on the course unit
Degree course Second cycle degree in IN2371, Degree course structure A.Y. 2019/20
Degree course track INTERNATIONAL MOBILITY [005PD]
Number of ECTS credits allocated 6.0
Type of assessment Mark
Course unit English denomination 3D AUGMENTED REALITY
Department of reference Department of Information Engineering
Mandatory attendance No
Language of instruction English
Single Course unit The Course unit can be attended under the option Single Course unit attendance
Optional Course unit The Course unit can be chosen as Optional Course unit

Teacher in charge SIMONE MILANI ING-INF/03


ECTS: details
Type Scientific-Disciplinary Sector Credits allocated
Core courses ING-INF/03 Telecommunications 6.0

Course unit organization
Period First semester
Year 1st Year
Teaching method frontal

Type of hours Credits Teaching hours Hours of individual study Shifts
Lecture 6.0 48 102.0 No turn

Start of activities 30/09/2019
End of activities 18/01/2020

Examination board
Board From To Members of the board
1 A.A. 2019/2020 01/10/2019 15/03/2021 MILANI SIMONE (President)
ZANUTTIGH PIETRO (Full member)

Prerequisites: In order to attend the course, students must possess a basic knowledge of Calculus and Linear Algebra (including basic matrix operations, inversion, and diagonalization).
Attendees will have the opportunity to test their preliminary knowledge with an online test. Basic knowledge of the Matlab software is required as well.

Some preliminary knowledge of image acquisition, camera calibration, computation of local descriptors, and supervised classification of data arrays can be useful (although not strictly necessary). These topics are covered in detail in the Computer Vision and Machine Learning courses.
Target skills and knowledge: The course is structured in order to provide students with both theoretical and practical knowledge on the topics of virtual, augmented, and mixed reality; moreover, it will give a clear sense of the deep interconnections between computer vision and computer graphics.

Across its various parts, the course is expected to provide the following knowledge and skills:
1. To be aware of the main mathematical models that rule computer vision and graphical interfaces.
2. To learn and engineer the main computer vision strategies for the acquisition of 3D models and scenes.
3. To know and be able to implement different deep learning strategies for the classification of objects, gestures, and scenes.
4. To evaluate and use different rendering algorithms depending on the application scenario.
5. To identify the most appropriate strategies for a given augmented, virtual, and mixed reality scenario.
6. To know the main scenarios of application in cross-disciplinary contexts.
7. To develop some practical programming skills in implementing AR applications.

Students will also have the opportunity to develop and use some computer vision, graphics, and augmented reality algorithms by means of lab sessions.

Within its time limits, the course also aims to introduce the students to current computer vision and computer graphics tools such as OpenCV, OpenGL, and Unity.
Examination methods: Final evaluation will be performed by means of a written exam and the development of a final project (to be documented with a written report). Alternatively, the final project report can be replaced by two reports on two lab experiences (chosen by the student).
Reports must be handed in at least one day before the final exam. The final score will be a weighted average of the written exam (50%) and the final project (50%).

The evaluation topics for the written exam will be clearly indicated during the course and in the course material.
Assessment criteria: The final evaluation will be based on each student's knowledge of the topics presented during the course and his/her ability to apply some of the strategies overviewed (building and visualizing 3D models within typical augmented and virtual reality contexts). Such topics will be clearly indicated during the course and in the course material.

Evaluation criteria will be:
1. Completeness of the acquired knowledge.
2. Ability to analyze and implement an augmented reality application through the proposed techniques.
3. Proper use of technical terminology, both written and oral.
4. Appropriateness and effectiveness in the identification of the most suitable 3D acquisition, classification and graphical rendering strategies given the application scenario.
5. Programming abilities.
6. Quality of oral presentation.

Any effort by the student showing personal involvement and special care will be rewarded in the final score.
Course unit contents: a) Acquisition systems for images and 3D models

1) Image formation and camera model
Perspective projection, Pin-hole camera, Thin lenses, Fish-eye lenses, simplified and general camera model, camera calibration.
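As a minimal sketch of the pinhole camera model listed above (the intrinsic parameters `fx`, `fy`, `cx`, `cy` and the sample point are illustrative, not taken from the course material):

```python
def project_pinhole(X, fx, fy, cx, cy):
    """Project a 3D point X (in camera coordinates) to pixel coordinates
    using the pinhole model: perspective division by depth z, then
    scaling by the focal lengths and shifting by the principal point."""
    x, y, z = X
    u = fx * x / z + cx
    v = fy * y / z + cy
    return (u, v)

# A point 2 m in front of the camera, slightly off-axis:
print(project_pinhole((0.1, -0.05, 2.0), fx=800, fy=800, cx=320, cy=240))
# (360.0, 220.0)
```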

2) Computation of salient points and features
Harris and Stephens method, Scale-Invariant Feature Transform (SIFT), salient point correspondences.

3) Homographies
Computation of homographies (DLT), homographies and object detection, applications of 2D augmented reality.
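A minimal sketch of homography estimation via the DLT, assuming exact point correspondences (the function name and the sample quadrilateral are made up for the example; real code should also normalize coordinates first):

```python
import numpy as np

def dlt_homography(src, dst):
    """Estimate H (dst ~ H @ src in homogeneous coords) from >= 4
    point correspondences via the Direct Linear Transform."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # h = right singular vector of A with the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale ambiguity

# Four corners of a unit square mapped onto a skewed quadrilateral:
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(0, 0), (2, 0.1), (2.2, 1.3), (0.1, 1.1)]
H = dlt_homography(src, dst)
p = H @ np.array([1.0, 0.0, 1.0])  # map the point (1, 0)
print(p[:2] / p[2])                # ≈ [2.0, 0.1]
```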

4) Stereopsis
3D triangulation, epipolar geometry, epipolar rectification, essential matrix and its factorization, motion and structure from calibrated homography, local correspondence methods, window correspondence methods, accuracy-reliability trade-off, occlusions, global correspondence methods.
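The 3D triangulation step above can be sketched with the standard linear (DLT) method; the camera matrices and the scene point below are illustrative assumptions:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation: recover the 3D point X observed at pixels
    x1, x2 by two cameras with 3x4 projection matrices P1, P2."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Two rectified unit-focal cameras with a 0.5 baseline along x:
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 4.0])
x1 = X_true[:2] / X_true[2]                # projection in camera 1
x2 = (X_true[:2] + [-0.5, 0]) / X_true[2]  # projection in camera 2
print(triangulate(P1, P2, x1, x2))         # ≈ [0.2, 0.1, 4.0]
```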

5) 3D reconstruction from other sensors
IR Structured-light depth sensors (MS Kinect v.1), Time-of-Flight depth sensors (MS Kinect v.2), active stereo sensors, laser scanners.

6) Non-calibrated reconstruction
Fundamental matrix and its computation, projective reconstruction from 2 and N views, incremental reconstruction, Structure-from-Motion, bundle adjustment.
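A minimal, unnormalized sketch of the linear eight-point estimation of the fundamental matrix (the synthetic two-view setup is invented for the example; production code should apply Hartley normalization first):

```python
import numpy as np

def eight_point(pts1, pts2):
    """Linear eight-point algorithm: estimate F such that x2' F x1 ~ 0
    for corresponding image points, then enforce rank 2."""
    A = np.array([[u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
                  for (u1, v1), (u2, v2) in zip(pts1, pts2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)  # enforce the rank-2 constraint
    S[2] = 0.0
    return U @ np.diag(S) @ Vt

# Synthetic setup: an identity camera and a rotated/translated one
rng = np.random.default_rng(0)
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(8, 3))  # 8 scene points
c, s = np.cos(0.1), np.sin(0.1)
R = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
t = np.array([1.0, 0.2, 0.0])
x1 = X[:, :2] / X[:, 2:]               # normalized projections, view 1
Xc2 = X @ R.T + t
x2 = Xc2[:, :2] / Xc2[:, 2:]           # normalized projections, view 2
F = eight_point(x1, x2)
# Epipolar residuals x2' F x1 should be ~0 for all correspondences
res = [np.array([*p2, 1]) @ F @ np.array([*p1, 1]) for p1, p2 in zip(x1, x2)]
print(max(abs(r) for r in res))        # ≈ 0
```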

7) Optical flow
Motion field: computation of motion and structure.

8) Orientation methods
Quaternions, orientation 2D-2D, orientation 3D-3D: DLT and ICP methods, orientation 3D-2D.
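Quaternion-based rotation, one of the orientation tools listed above, can be sketched in pure Python (the 90° example rotation is illustrative only):

```python
import math

def quat_mul(q, r):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(q, v):
    """Rotate vector v by unit quaternion q via q * (0, v) * conj(q)."""
    qv = (0.0, *v)
    qc = (q[0], -q[1], -q[2], -q[3])  # conjugate = inverse for unit q
    return quat_mul(quat_mul(q, qv), qc)[1:]

# 90-degree rotation about the z axis: q = (cos 45°, 0, 0, sin 45°)
q = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
print(rotate(q, (1.0, 0.0, 0.0)))  # ≈ (0, 1, 0)
```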

b) Machine learning methods for augmented reality

9) Object detection and scene understanding
Image and 3D models features, image/object classification strategies, machine learning algorithms for object classification, Support Vector Machine for image/object classification, Deep Neural Networks for image/object classification, application of Deep Learning strategies to Human-computer interfaces (HCI).
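As a toy stand-in for the supervised classifiers mentioned above (SVMs and deep networks are of course far richer), a minimal perceptron illustrates the basic idea of learning a linear decision boundary over feature vectors; all data below are invented for the example:

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Minimal perceptron: learn weights w and bias b so that
    sign(w . x + b) matches the +/-1 labels (assumes separable data)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            if y * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Two toy "feature vectors" per class (e.g. descriptors of two objects):
X = [(2.0, 1.0), (1.5, 2.0), (-1.0, -1.5), (-2.0, -0.5)]
y = [1, 1, -1, -1]
w, b = train_perceptron(X, y)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
print([predict(x) for x in X])  # [1, 1, -1, -1]
```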

c) Visualization and graphical rendering

10) 3D displays, VR visors, and augmented reality devices

11) Rendering
Projective geometry and convention, ray tracing and ray casting, the radiance (or rendering) equation and its solution, illumination, the radiance solution by local methods: Phong and Cook-Torrance models, rasterization, the OpenGL pipeline.
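The Phong local illumination model mentioned above can be sketched as a scalar intensity computation per surface point (the reflectance coefficients below are illustrative):

```python
def phong(n, l, v, kd, ks, shininess, light=1.0):
    """Scalar Phong intensity: diffuse term kd*(n.l) plus specular term
    ks*(r.v)^shininess, where r is l reflected about the unit normal n."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    n_dot_l = max(dot(n, l), 0.0)
    r = tuple(2 * n_dot_l * ni - li for ni, li in zip(n, l))  # reflected dir
    r_dot_v = max(dot(r, v), 0.0)
    return light * (kd * n_dot_l + ks * r_dot_v ** shininess)

# Light and viewer both along the normal: maximal diffuse and specular
print(phong(n=(0, 0, 1), l=(0, 0, 1), v=(0, 0, 1),
            kd=0.6, ks=0.3, shininess=10))  # ≈ 0.9
```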

12) Interactive augmented reality.
Introduction to Unity. Development of simple AR applications.
Planned learning activities and teaching methods: The course offers a guided tour of the computer vision, machine learning, and computer graphics topics needed for current virtual and augmented reality applications.

The course rationale can be divided into three main parts:
a) description and modelling of imaging systems for images and 3D models acquisition;
b) classification and deep learning strategies for AR applications;
c) rendering real or virtual 3D models to standard images and 3D/AR devices.

Part a) has the objective of explaining the operation and the mathematical models of current imaging systems (e.g., video-cameras, Time of Flight systems like MS Kinect, and many more) in the language of computational photography. These systems will be analyzed focusing on their capability of reconstructing 3D models of static and dynamic scenes using standard images (with special focus on stereo and active stereo systems) and/or depth cameras. Part b) will also deal with the problem of image classification and scene understanding by means of machine learning algorithms. The objective of Part (c) is to introduce the rendering methods and their adaptation to the specific viewing devices. In this final part, the course will also introduce the problem of the interaction between real and virtual world (mixed reality, human-computer interfaces).

The topics are treated by means of frontal lectures with computational examples based on MATLAB, OpenCV, and Unity.
Learning is reinforced by lab sessions and a final project that confront the student with practical situations related to the concepts seen in class.

The theoretical part of the course will be presented in 19 frontal lectures, while 5 laboratory lectures will be given to guide students across the programming of augmented reality applications.
Additional notes about suggested reading: The study material consists of the class notes made available before every class meeting.
The notes distill and condense various research papers and content coming from several textbooks.

The frontal teaching activities involve the use of transparencies, blackboard sketches, as well as example programs to be tested at home. All the teaching material presented during the lectures is made available on the platform "". Students will be provided with a MATLAB license by the University of Padova for lab exercises and programming. A student license for the Unity platform will be available for free download as well.
Textbooks (and optional supplementary readings)
  • Klette, Reinhard, Concise Computer Vision. London: Springer, 2014.
  • Bishop, Christopher M., Pattern Recognition and Machine Learning. New York: Springer.
  • Forsyth, David; Ponce, Jean, Computer Vision: A Modern Approach. Boston: Pearson, 2012.
  • Szeliski, Richard, Computer Vision: Algorithms and Applications. New York: Springer, 2011.
  • Kanatani, Kenichi; Sugaya, Yasuyuki; Kanazawa, Yasushi, Guide to 3D Vision Computation. Springer International Publishing, 2016.
  • Watt, Jeremy; Borhani, Reza; Katsaggelos, Aggelos, Machine Learning Refined: Foundations, Algorithms, and Applications. New York: Cambridge University Press, 2016.

Innovative teaching methods: Teaching and learning strategies
  • Lecturing
  • Laboratory
  • Problem based learning
  • Working in group
  • Problem solving
  • Auto-correcting quizzes or tests for periodic feedback or exams
  • Use of online videos
  • Loading of files and pages (web pages, Moodle, ...)

Innovative teaching methods: Software or applications used
  • Moodle (files, quizzes, workshops, ...)
  • Kaltura (desktop video shooting, file loading on MyMedia Unipd)
  • Latex
  • Matlab
  • Unity

Sustainable Development Goals (SDGs)
  • Good Health and Well-Being
  • Quality Education
  • Industry, Innovation and Infrastructure
  • Sustainable Cities and Communities