Institute of Cognitive Science

Research Group Computer Vision


Thesis topics in Computer Vision

This page lists topics for bachelor, master, and PhD theses in the Computer Vision group. The project descriptions are proposals that may be adapted based on your interests, background and expertise. This is not a closed list and additional ideas are always welcome. Feel free to contact us for further information or to discuss topics in detail.
re.photos (Bachelor / Master)

Objective: The Media Informatics group has developed the re.photos system, which allows historical and current images of a scene to be compared with each other. To align the image pairs as closely as possible, a warping transformation is applied. Currently, corresponding points have to be set manually in both images. In this thesis, jointly supervised by Prof. Vornberger and Prof. Heidemann, computer vision algorithms should take over or support this task. It is particularly important to bring the elements that are salient to humans, such as building edges against a bright sky, into registration.
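
A natural starting point is classical feature matching. The following sketch (Python/OpenCV; file names and parameters are illustrative, not part of re.photos) estimates corresponding points with ORB descriptors and fits a warping homography with RANSAC. Whether such generic features capture the structures that are salient to humans is exactly the open question of this thesis.

    # Hedged sketch: automatic correspondence estimation between a historical and a
    # current photograph using ORB features and a RANSAC homography (OpenCV).
    # "historical.jpg" and "current.jpg" are placeholder file names.
    import cv2
    import numpy as np

    old = cv2.imread("historical.jpg", cv2.IMREAD_GRAYSCALE)
    new = cv2.imread("current.jpg", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(old, None)
    kp2, des2 = orb.detectAndCompute(new, None)

    # Mutual best matches via brute-force Hamming matching with cross-check
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Robustly estimate the warp that registers the historical image onto the current one
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)
    registered = cv2.warpPerspective(old, H, (new.shape[1], new.shape[0]))
    cv2.imwrite("registered.jpg", registered)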

Further Information:


Contact:
Gunther Heidemann
3D point cloud segmentation (Bachelor / Master)

Objective: Applying common image segmentation algorithms to dense 3D point clouds in order to obtain meaningful clusters. A possible task is to split the interior of the Delft Windmill point cloud into its constituent objects.
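
As a simple baseline, density-based clustering already yields a rough decomposition of a raw point cloud. The sketch below (Python, scikit-learn; file name and parameters are assumptions) illustrates this before more elaborate, image-segmentation-inspired methods are considered.

    # Hedged sketch: density-based clustering of a raw 3D point cloud into candidate
    # segments with scikit-learn's DBSCAN. The file name, eps, and min_samples are
    # assumptions and would need tuning for the Delft Windmill data.
    import numpy as np
    from sklearn.cluster import DBSCAN

    points = np.loadtxt("windmill_interior.xyz")  # expected shape (N, 3)

    labels = DBSCAN(eps=0.05, min_samples=20).fit_predict(points)
    n_clusters = labels.max() + 1
    print(f"{n_clusters} clusters, {np.sum(labels == -1)} points labelled as noise")

    # Export each cluster as a separate segment for inspection
    for c in range(n_clusters):
        np.savetxt(f"segment_{c:03d}.xyz", points[labels == c])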

Further Information:

  • Delft Windmill point cloud: data.3tu.nl
  • Jiao, X., Zhang, H., & Wu, T. (2015). Mesh segmentation guided by seed points. Journal of Advanced Mechanical Design, Systems, and Manufacturing, 9(4), JAMDSM0051-JAMDSM0051.

Contact:
Julius Schöning / Ulf Krumnack
An accurate low-cost eye tracker (Bachelor / Master)

Objective: Design a highly pixel-accurate, low-cost eye tracker (<250 €) that does not need a chin rest. To this end, the information provided by an RGB-D camera is used to compensate for head movement and to correct the pixel information of the eye tracker device.

Further Information:


Contact:
Julius Schöning
Evaluation of Deep Learning Frameworks (Bachelor / Master)

Objective: Develop and apply criteria to compare and evaluate existing deep learning frameworks. One challenge is that these frameworks differ in conceptual setup, level of abstraction, ease of use, performance, stability, supported programming languages, and many other aspects.

Further Information:


Contact:
Ulf Krumnack
Gesture-based natural user interface (Bachelor / Master)

Objective: The use of gestures for interaction with electronic devices is becoming more popular with the availability of affordable 3D cameras or even ordinary webcams. This work may include a review of existing approaches, a comparison of 2D- and 3D-based techniques, or the evaluation of different gestures in a user study.

Further Information:


Contact:
Ulf Krumnack / Julius Schöning
Human pose estimation (and classification) in video sequences for surveillance scenarios (Bachelor / Master)

Objective: Improve body-part detection in video sequences by constraining poselet-conditioned pictorial structures derived from part-based models.

Further Information:

  • Mykhaylo Andriluka, Leonid Pishchulin, Peter V. Gehler, Bernt Schiele. 2D Human Pose Estimation: New Benchmark and State of the Art Analysis. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2014) dx.doi.org
  • Leonid Pishchulin, Mykhaylo Andriluka, Peter V. Gehler and Bernt Schiele. Poselet Conditioned Pictorial Structures. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2013). dx.doi.org
  • Starting from 01/05/2016
  • Duration: 4-5 months

Contact:
Pattreeya Tanisaro
K-means segmentation on videos (Bachelor / Master)

Objective: Applying k-means color segmentation to HD video sequences (up to 5 min), exploiting the benefits of current multicore architectures without running out of system resources (RAM, available HDD space, etc.). Before multicore and memory optimization, a simple test video should be defined and used for developing the video k-means segmentation.
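
A naive single-core baseline could look like the sketch below (Python/OpenCV; the file name and the number of clusters are placeholders). The thesis would then focus on distributing this work across cores and bounding memory consumption for long HD sequences.

    # Naive single-core baseline: per-frame k-means color segmentation of a video
    # with OpenCV. "test.mp4" and K are placeholders; the thesis would parallelize
    # this over frames/chunks and keep memory usage bounded.
    import cv2
    import numpy as np

    K = 8
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)

    cap = cv2.VideoCapture("test.mp4")
    writer = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        pixels = frame.reshape(-1, 3).astype(np.float32)
        _, labels, centers = cv2.kmeans(pixels, K, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
        segmented = centers[labels.flatten()].astype(np.uint8).reshape(frame.shape)
        if writer is None:
            fourcc = cv2.VideoWriter_fourcc(*"mp4v")
            writer = cv2.VideoWriter("segmented.mp4", fourcc, cap.get(cv2.CAP_PROP_FPS),
                                     (frame.shape[1], frame.shape[0]))
        writer.write(segmented)
    cap.release()
    if writer is not None:
        writer.release()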

Further Information:

  • K-Means

Contact:
Julius Schöning / Ulf Krumnack
Local blur detection (Bachelor / Master)

Objective: Determine whether or not an area of interest in an image or video is blurred. To this end, existing algorithms for blur detection on images must be adapted to work on the specific area of interest.
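
To get a feeling for the problem, a very simple sharpness measure is the variance of the Laplacian restricted to the region of interest, sketched below (Python/OpenCV; image path, ROI, and threshold are made-up values). The cited approaches are considerably more robust, e.g. for texture-poor regions.

    # Simple illustrative baseline (not one of the cited methods): variance of the
    # Laplacian inside a region of interest as a crude sharpness measure. The image
    # path, ROI coordinates, and threshold are placeholders.
    import cv2

    def roi_is_blurred(image_path, x, y, w, h, threshold=100.0):
        gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        roi = gray[y:y + h, x:x + w]
        sharpness = cv2.Laplacian(roi, cv2.CV_64F).var()
        return sharpness < threshold

    print(roi_is_blurred("frame.png", 50, 80, 200, 150))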

Further Information:

  • Chen, X., Yang, J., Wu, Q., & Zhao, J. (2010, September). Motion blur detection based on lowest directional high-frequency energy. In Image Processing (ICIP), 2010 17th IEEE International Conference on (pp. 2533-2536). IEEE.
  • Narvekar, N. D., & Karam, L. J. (2011). A no-reference image blur metric based on the cumulative probability of blur detection (CPBD). Image Processing, IEEE Transactions on, 20(9), 2678-2683.

Contact:
Julius Schöning
Material perception by a computer (Master)

Objective: Train a computer vision system to perceive material categories such as “wood”, “stone”, and “plastic” from images. Existing approaches should be reviewed, and selected algorithms implemented, tested, and finally improved.
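
A minimal baseline, assuming a folder of labelled material patches, could combine simple color histograms with an SVM, as in the hedged sketch below (Python, OpenCV + scikit-learn; the data layout and all parameters are assumptions). The cited work relies on considerably richer reflectance and texture cues.

    # Hypothetical baseline: classify material patches from simple color histograms
    # with an SVM. The data layout (one folder per material under "materials/") and
    # all parameters are assumptions made for illustration only.
    import glob
    import os
    import cv2
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    def color_histogram(path, bins=16):
        img = cv2.imread(path)
        hist = cv2.calcHist([img], [0, 1, 2], None, [bins] * 3,
                            [0, 256, 0, 256, 0, 256])
        return cv2.normalize(hist, hist).flatten()

    X, y = [], []
    for label, folder in enumerate(sorted(glob.glob("materials/*"))):
        for path in glob.glob(os.path.join(folder, "*.jpg")):
            X.append(color_histogram(path))
            y.append(label)

    X_train, X_test, y_train, y_test = train_test_split(np.array(X), np.array(y), test_size=0.3)
    clf = SVC(kernel="rbf").fit(X_train, y_train)
    print("test accuracy:", clf.score(X_test, y_test))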

Further Information:

  • Liu, C., Sharan, L., Adelson, E. H., & Rosenholtz, R. (2010, June). Exploring features in a bayesian framework for material recognition. In Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on (pp. 239-246). IEEE.
  • Adelson, E. H. (2001, June). On seeing stuff: the perception of materials by humans and machines. In Photonics West 2001-electronic imaging (pp. 1-12). International Society for Optics and Photonics.
  • Fleming, R. W. (2014). Visual perception of materials and their properties. Vision research, 94, 62-75.

Contact:
Julius Schöning / Ulf Krumnack
Occluded object checker (Bachelor / Master)

Objective: Predict whether or not a certain object in an image or video is completely visible. For this purpose, meaningful bio-inspired features are needed, which should be defined as well as tested.

Further Information:

  • Kellman, P. J., & Spelke, E. S. (1983). Perception of partly occluded objects in infancy. Cognitive psychology, 15(4), 483-524

Contact:
Julius Schöning
Shot boundary detection in video sequences (Bachelor)

Objective: Detecting shot boundaries helps us to re-initialize the process of object tracking in video scenes. Since general similarity measures do not work for gradual transition types such as dissolves, fades, and wipes, we will investigate state-of-the-art techniques in both the spatial feature and temporal domains.
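
For hard cuts, a naive detector based on the histogram difference of consecutive frames already works reasonably well; the sketch below (Python/OpenCV; file name and threshold are placeholders) illustrates it. Gradual transitions are exactly where this breaks down and where the techniques from the reference below come in.

    # Naive hard-cut detector for illustration only: a boundary is flagged when the
    # HSV histogram of consecutive frames changes abruptly. The file name and the
    # threshold are placeholders; gradual transitions need more elaborate methods.
    import cv2

    def frame_hist(frame):
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        return cv2.normalize(hist, hist).flatten()

    cap = cv2.VideoCapture("clip.mp4")
    ok, prev = cap.read()
    prev_hist, index = frame_hist(prev), 1
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = frame_hist(frame)
        distance = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
        if distance > 0.4:
            print(f"possible cut at frame {index}")
        prev_hist, index = hist, index + 1
    cap.release()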

Further Information:

  • Starting: July/August 2016
  • SenGupta, A.; Thounaojam, D.M.; Manglem Singh, K.; Roy, S.. Video shot boundary detection: A review. Electrical, Computer and Communication Technologies (ICECCT), 2015 IEEE International Conference on , vol., no., pp.1-6, 5-7 March 2015 dx.doi.org

Contact:
Pattreeya Tanisaro
Subparts detection from motion (Master)

Objective: How can subparts, including their degrees of freedom, be detected from moving objects, e.g. in a video or in front of a camera? After extensive literature research, the most promising concepts should be analyzed critically. Finally, a new or improved concept should be described.

Further Information:


Contact:
Julius Schöning / Ulf Krumnack
Subparts detection on 3D point clouds (Master)

Objective: Based on corresponding images and videos, a 3D point cloud should be split into meaningful subparts/limbs. For this, features of the corresponding images and videos should be extracted and used for the segmentation of the 3D point cloud.

Further Information:

  • Jiao, X., Zhang, H., & Wu, T. (2015). Mesh segmentation guided by seed points. Journal of Advanced Mechanical Design, Systems, and Manufacturing, 9(4), JAMDSM0051-JAMDSM0051.

Contact:
Julius Schöning
Subparts detection on images (Bachelor / Master)

Objective: Splitting objects or merging segments into meaningful object subparts, e.g. splitting a car into the subparts wheels and body. This can be done automatically using features like edges and color, or semi-automatically with intuitive interaction metaphors.
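
One possible semi-automatic route is to start from an oversegmentation and let the user (or a grammar model, as in the reference below) merge segments into subparts. The sketch below (Python, scikit-image; image path and parameters are illustrative) produces such SLIC superpixels as atomic units.

    # Hedged sketch: SLIC superpixels as atomic segments that could later be merged
    # into semantically meaningful subparts (e.g. wheels vs. body). "car.jpg" and
    # the parameters are illustrative.
    from skimage import io, segmentation
    from skimage.util import img_as_ubyte

    image = io.imread("car.jpg")
    segments = segmentation.slic(image, n_segments=200, compactness=10)

    # Draw superpixel boundaries on the image for visual inspection
    boundaries = segmentation.mark_boundaries(image, segments)
    io.imsave("superpixels.png", img_as_ubyte(boundaries))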

Further Information:

  • Girshick, R. B., Felzenszwalb, P. F., & Mcallester, D. A. (2011). Object detection with grammar models. In Advances in Neural Information Processing Systems (pp. 442-450).

Contact:
Julius Schöning
Turning a webcam into an eye tracker (Bachelor / Master)

Objective: Tracking user gaze with a “normal” webcam in real time. Based on extensive literature research, the best approaches should be benchmarked and finally combined into a prototype.

Further Information:


Contact:
Ulf Krumnack / Julius Schöning
Video Summarization (Bachelor / Master)

Objective: Assess the benefit of video summarization methods on a conceptual and/or practical level. A special focus can be the evaluation of recent approaches that introduce deep learning into that field.
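
On the practical side, even a toy keyframe summary helps to make the evaluation concrete. The sketch below (Python, OpenCV + scikit-learn; file name and number of keyframes are placeholders, and all frames are naively held in memory) clusters per-frame color histograms and keeps one representative frame per cluster; the deep-learning approaches from the cited papers would replace these hand-crafted features.

    # Toy keyframe summary: cluster per-frame color histograms and keep the frame
    # closest to each cluster center. "lecture.mp4" and the number of keyframes are
    # placeholders, and all frames are naively held in memory.
    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    cap = cv2.VideoCapture("lecture.mp4")
    frames, feats = [], []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        feats.append(cv2.normalize(hist, hist).flatten())
        frames.append(frame)
    cap.release()

    feats = np.array(feats)
    km = KMeans(n_clusters=10, n_init=10).fit(feats)
    for c, center in enumerate(km.cluster_centers_):
        idx = int(np.argmin(np.linalg.norm(feats - center, axis=1)))
        cv2.imwrite(f"keyframe_{c:02d}.png", frames[idx])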

Further Information:

  • Money, A. G., & Agius, H. (2008). Video summarisation: A conceptual framework and survey of the state of the art. Journal of Visual Communication and Image Representation, 19(2), 121-143.
  • Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., & Fei-Fei, L. (2014). Large-scale video classification with convolutional neural networks. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition (pp. 1725-1732).
  • Tran, D., Bourdev, L., Fergus, R., Torresani, L., & Paluri, M. (2014). Learning spatiotemporal features with 3d convolutional networks. arXiv preprint arXiv:1412.0767.

Contact:
Ulf Krumnack / Julius Schöning
Watershed segmentation on videos (Bachelor / Master)

Objective: Applying watershed segmentation to HD video sequences (up to 5 min), exploiting the benefits of current multicore architectures without running out of system resources (RAM, available HDD space, etc.). Before multicore and memory optimization, a simple test video should be defined and used for developing the video watershed segmentation. Idea: apply watershed segmentation in 3D space (x, y, t) on the video stream.
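
A per-frame baseline, before any multicore or memory optimization, could follow the standard OpenCV marker-based watershed recipe sketched below (file name and thresholds are placeholders). The 3D (x, y, t) idea mentioned above would instead treat the whole video volume at once.

    # Per-frame marker-based watershed baseline (standard OpenCV recipe). "test.mp4"
    # and the thresholds are placeholders; the 3D (x, y, t) idea would treat the
    # whole video volume instead of single frames.
    import cv2
    import numpy as np

    def watershed_frame(frame):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        sure_bg = cv2.dilate(binary, np.ones((3, 3), np.uint8), iterations=3)
        dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
        _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
        sure_fg = sure_fg.astype(np.uint8)
        unknown = cv2.subtract(sure_bg, sure_fg)
        _, markers = cv2.connectedComponents(sure_fg)
        markers = markers + 1          # reserve label 0 for the unknown region
        markers[unknown == 255] = 0    # pixels the watershed still has to assign
        return cv2.watershed(frame, markers)  # ridge pixels are marked with -1

    cap = cv2.VideoCapture("test.mp4")
    ok, frame = cap.read()
    while ok:
        labels = watershed_frame(frame)
        ok, frame = cap.read()
    cap.release()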

Further Information:

  • Watershed

Contact:
Julius Schöning / Ulf Krumnack