Science in Progress is being phased out. The database was frozen on 12-03-2019; no new theses will be added here. Theses whose publication has been approved have been, and will be, published at: UvA Scripties Online

Artificial Intelligence in Progress



Displaying theses 1-10 of 510 total

T.L. Pelsmaeker
Master programme: Artificial Intelligence February 21st, 2019
Institute: ILLC Research group: Language and Computation Graduation thesis Supervisor: Dr W.F. Aziz
Effective Estimation of Deep Generative Models of Language
In this thesis, we set out to model English sentences by means of statistical distributions. Specifically, we apply variational autoencoders, models that learn latent global representations of sentences, to language in order to generate novel sentences from this latent space. However, these models do not work out of the box when applied to language. We therefore investigate several methods to better optimise these models on a language modelling task, and we investigate various statistical distributions for the latent space, enabling the encoding of richer sentence representations. In the end, we show that variational autoencoders optimised with these techniques can model and generate correct and novel English sentences.
Scientific abstract (pdf 89K)   Full text (pdf 1136K)
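The objective this abstract alludes to is the standard VAE evidence lower bound; a minimal sketch, assuming a diagonal Gaussian posterior and using KL weighting (one commonly used trick for optimising VAEs on language), might look like this. The function names are illustrative, not taken from the thesis:

```python
import math

def gaussian_kl(mu, log_var):
    """KL( N(mu, exp(log_var)) || N(0, I) ) for a diagonal Gaussian,
    summed over latent dimensions -- the regulariser in the VAE ELBO."""
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, log_var))

def elbo(log_likelihood, mu, log_var, beta=1.0):
    """Evidence lower bound: reconstruction log-likelihood minus the
    (optionally down-weighted) KL term. Annealing beta from 0 to 1 is
    one common remedy when the KL term collapses on text data."""
    return log_likelihood - beta * gaussian_kl(mu, log_var)
```

With `mu = 0` and `log_var = 0` the posterior equals the prior, the KL term vanishes, and the bound reduces to the reconstruction term alone.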

J.H. Winkens
Master programme: Artificial Intelligence February 14th, 2019
Institute: Informatics Institute Research group: Amsterdam Machine Learning Lab Graduation thesis Supervisor: Geert Litjens
Out-of-distribution detection for computational pathology with multi-head ensembles
Distribution shift is a common phenomenon in real-life safety-critical settings and is detrimental to the performance of current deep learning models. A principled method for detecting such shift is critical to building safe and predictable automated image-analysis pipelines for medical imaging. In this work, we frame out-of-distribution detection for computational pathology as an epistemic uncertainty estimation problem. Given the difficulty of obtaining a sufficiently multi-modal predictive distribution for uncertainty estimation, we present a multi-head CNN topology as a highly diverse ensembling method. We show empirically that the method exhibits greater representational diversity than popular ensembling methods such as MC dropout and Deep Ensembles. We repurpose the fast gradient sign method and show that it separates the softmax scores of in-distribution and out-of-distribution samples. We identify the challenges this task poses in the domain of computational pathology and extensively demonstrate the effectiveness of the proposed method on two clinically relevant tasks in this field.
Scientific abstract (pdf 1K)   Full text (pdf 12355K)
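One standard way to score epistemic uncertainty from an ensemble of heads, in the spirit of the abstract above, is the mutual information between the prediction and the ensemble member: the entropy of the averaged prediction minus the average per-head entropy. A minimal sketch (illustrative, not the thesis's exact scoring rule):

```python
import math

def entropy(p):
    """Shannon entropy of a discrete distribution (natural log)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def mutual_information(head_probs):
    """Epistemic uncertainty of a multi-head ensemble: entropy of the
    mean softmax minus the mean entropy of the individual heads.
    High values indicate head disagreement, a common OOD signal."""
    n = len(head_probs)
    k = len(head_probs[0])
    mean = [sum(h[c] for h in head_probs) / n for c in range(k)]
    return entropy(mean) - sum(entropy(h) for h in head_probs) / n
```

Heads that agree give a score near zero; two heads that each predict a different class with full confidence give the maximum score of log 2 on a two-class problem.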

E. van Krieken
Master programme: Artificial Intelligence February 12th, 2019
Institute: VU / Other Research group: Knowledge Representation and Reasoning Graduation thesis Supervisor: Frank van Harmelen
Differentiable Fuzzy Logics: Integrated Learning and Reasoning using Gradient Descent
In recent years there has been a push to integrate two approaches to AI. The first is Symbolic AI, which uses symbols to refer to concepts in the world. These methods are based on logic and are great for defining background domain knowledge. The second is Deep Learning, which has been applied with great success this decade and learns how to act directly from data. In particular, some research combines Symbolic AI and Deep Learning by injecting background knowledge written in Symbolic AI into the Deep Learning model. For example, suppose we train a Deep Learning model to recognize objects in the world around it, and we have written down using Symbolic AI that ravens are black. If the Deep Learning model says it sees a white raven, Symbolic AI can correct the model by arguing that because it saw a raven, it had to be black and not white. In this thesis, we show that we can use a logic called Fuzzy Logic to inject background knowledge and improve the performance of Deep Learning models. Furthermore, we analyse why and how it should, or should not, work.
Scientific abstract (pdf 1K)   Full text (pdf 5996K)
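The core idea of differentiable fuzzy logics is that a logical rule becomes a smooth penalty term. A minimal sketch of the "ravens are black" example, using the Reichenbach (product-based) implication as one of the several fuzzy operators such work typically analyses (the choice of operator here is illustrative, not the thesis's verdict):

```python
def product_implication(a, b):
    """Reichenbach fuzzy implication: a -> b = 1 - a + a*b, where a and
    b are truth values in [0, 1]. Differentiable in both arguments, so
    its gradient can flow back into a neural network's predictions."""
    return 1.0 - a + a * b

def knowledge_loss(raven, black):
    """Penalty for violating the rule 'ravens are black': one minus the
    truth value of raven -> black. Zero when the rule is satisfied."""
    return 1.0 - product_implication(raven, black)
```

A confident "raven" with a confident "not black" yields the maximal loss of 1, pushing the network's outputs toward consistency with the rule; if the model sees no raven at all, the implication is vacuously true and no gradient signal is produced.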

T.F.A. van der Ouderaa
Master programme: Artificial Intelligence January 31st, 2019
Institute: UvA / Other Research group: UvA / Other Graduation thesis Supervisor: Daniel E. Worrall
Reversible Networks for Memory-efficient Image-to-Image Translation in 3D Medical Imaging
The Pix2pix and CycleGAN losses have vastly improved the qualitative and quantitative visual quality of results in image-to-image translation tasks. We extend this framework by exploring approximately invertible architectures which are well suited to these losses. These architectures are approximately invertible by design and thus partially satisfy cycle-consistency before training even begins. Furthermore, since invertible architectures have constant memory complexity in depth, these models can be built arbitrarily deep. We are able to demonstrate superior quantitative output on the Cityscapes and Maps datasets. Additionally, we show that the model allows us to perform several memory-intensive medical imaging tasks, including a super-resolution problem on 3D MRI brain volumes. We also demonstrate that our model can perform a 3D domain-adaptation and 3D super-resolution task on chest CT volumes. By doing this, we provide a proof-of-principle for using reversible networks to create a model capable of pre-processing 3D CT scans to high resolution with a standardized appearance.
Scientific abstract (pdf 1K)   Full text (pdf 19227K)
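The memory argument in the abstract above rests on coupling layers being invertible by construction, so intermediate activations can be recomputed rather than stored. A minimal additive-coupling sketch in the style of NICE/RevNet blocks (a generic illustration, not the thesis's exact architecture):

```python
def coupling_forward(x1, x2, f):
    """Additive coupling layer: split the input into halves (x1, x2) and
    compute y1 = x1, y2 = x2 + f(x1). Invertible for *any* function f,
    even a deep non-invertible sub-network."""
    return x1, x2 + f(x1)

def coupling_inverse(y1, y2, f):
    """Exact inverse: x1 = y1, x2 = y2 - f(y1). Because inputs can be
    recomputed from outputs, activations need not be cached, giving
    constant memory cost in network depth."""
    return y1, y2 - f(y1)
```

Stacking such blocks (alternating which half is transformed) yields an approximately invertible image-to-image network that partially satisfies cycle-consistency before any training.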

R.J.P. Bakker
Master programme: Artificial Intelligence December 19th, 2018
Institute: UvA / Other Research group: UvA / Other Graduation thesis Supervisor: Maarten Marx
Evolving Regular Expression Features for Text Classification with Genetic Programming
Text classification algorithms often rely on vocabulary counters such as bag-of-words or character n-grams to represent text as a vector suitable for machine learning algorithms. In this work, automatically generated regular expressions are proposed as an alternative feature set. The proposed algorithm uses genetic programming to evolve a set of regular expression features from labeled text data and to train a classifier in an end-to-end fashion. Although a comparison of the generated and traditional text features indicates that a classifier using generated features alone does not make better predictions, the generated features capture patterns that the traditional features cannot. As a result, a classifier combining traditional features with generated features improves significantly.
Scientific abstract (pdf 1K)   Full text (pdf 2686K)
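The feature representation described above is easy to picture: each evolved regular expression becomes one binary column of the feature vector, and a fitness function scores a candidate pattern set. A toy sketch, with hypothetical function names and a deliberately simple fitness (the thesis trains a full classifier; this is only the feature-extraction idea):

```python
import re

def regex_features(texts, patterns):
    """Represent each text as a binary vector: does pattern i match?
    These vectors can feed any standard classifier, in the same way
    bag-of-words count vectors do."""
    return [[1 if re.search(p, t) else 0 for p in patterns] for t in texts]

def fitness(patterns, texts, labels):
    """Toy fitness for a pattern set: accuracy of the naive rule
    'predict 1 iff any pattern matches'. A genetic programme would
    mutate and recombine patterns to maximise a score like this."""
    preds = [1 if any(row) else 0 for row in regex_features(texts, patterns)]
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)
```

Unlike n-gram counters, a pattern such as `\d+` generalises over every numeric token at once, which is the kind of structure the abstract says traditional features miss.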

J.H.J. Linmans
Master programme: Artificial Intelligence December 13th, 2018
Institute: Informatics Institute Research group: Amsterdam Machine Learning Lab Graduation thesis Supervisor: Rianne van den Berg
Introspective Generative Modeling for Multimodal Translation
Developments in generative modeling have produced many successful models and related learning algorithms. However, these models have been less successful than those applied to discriminative tasks such as classification; generative modeling is generally considered a much harder task than discriminative learning. Yet few generative models fully exploit the representational power of discriminative modeling. In this thesis we study a new type of generative model, based solely on discriminative neural networks, and apply it to domain translation between images and text. We show that the proposed model generates high-quality images from an image description and, furthermore, can transform existing images based on a change in the corresponding caption.
Scientific abstract (pdf 1K)   For more info or full text, mail to: r.vandenberg2@uva.nl

K.W. Korrel
Master programme: Artificial Intelligence November 26th, 2018
Institute: ILLC Research group: Language and Computation Graduation thesis Supervisor: Dieuwke Hupkes
From Sequence to Attention; Search for a Compositional Bias in Sequence-to-Sequence Models
Although sequence-to-sequence models have been applied successfully to many tasks, they have been shown to have poor compositional skills. The principle of compositionality states that the meaning of a complex expression is a function only of its constituents and the manner in which they are combined. If a model understands the individual constituents and can combine them in novel ways, this allows for efficient learning and generalization. We first develop Attentive Guidance to show that guiding a sequence-to-sequence model in its attention modeling can help it find disentangled representations of the input symbols and process them individually. We then develop the sequence-to-attention architecture, a new model for sequence-to-sequence tasks with more emphasis on sparse attention modeling. We show that this architecture can find compositional solutions similar to those developed with Attentive Guidance, without requiring attention annotations in the training data.
Scientific abstract (pdf 2K)   Full text (pdf 1450K)
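The sparse attention the abstract emphasises can be illustrated with plain softmax attention over query-key dot products, where a temperature knob controls how close the weights get to the one-hot alignments that Attentive Guidance encourages. A generic sketch (the temperature mechanism here is an illustration, not the thesis's specific model):

```python
import math

def attention_weights(query, keys, temperature=1.0):
    """Softmax over query-key dot products. A lower temperature gives
    sharper, more nearly one-hot attention -- the kind of sparse
    alignment a compositional solution tends to exhibit."""
    scores = [sum(q * k for q, k in zip(query, key)) / temperature
              for key in keys]
    m = max(scores)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]
```

With a matching key and temperature 1 the weight on the correct position is already largest; shrinking the temperature drives it toward 1, i.e. the model attends to exactly one input symbol per output step.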

L.J. Pascha
Master programme: Artificial Intelligence November 26th, 2018
Institute: Informatics Institute Research group: QUVA Lab Graduation thesis Supervisor: Efstratios Gavves
Improving Word Embeddings for Zero-Shot Event Localisation by Combining Relational Knowledge with Distributional Semantics
Temporal event localisation from natural language text queries is a novel task in computer vision. Thus far, no consensus has been reached on how to predict the temporal boundaries of action segments precisely. While most attention in the literature has been devoted to the representation of vision, here we attempt to improve the representation of language for event localisation by applying Graph Convolutions (GraphSAGE) to ConceptNet with distributional node-embedding features. We argue that, due to the large vocabulary of language and the small scale of currently available temporally sentence-annotated datasets, performance depends heavily on zero-shot generalisation. We hypothesise that our approach leads to more visually centred and structured language embeddings that benefit this task. To test this, we design a wide-scale zero-shot dataset based on ImageNet on which to optimise our embeddings, and compare against other language-embedding methods. State-of-the-art results are obtained on 5 of 17 popular intrinsic evaluation benchmarks, but with slightly lower performance on the TACoS dataset. Because the training- and test-set vocabularies overlap almost completely, we deem additional testing necessary on a dataset that places more emphasis on word relatedness (hypernyms, hyponyms and synonyms), which arguably makes language representation learning difficult.
Scientific abstract (pdf 1K)   Full text (pdf 8953K)
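The GraphSAGE step the abstract relies on can be reduced to its essence: a node's new representation combines its own features with an aggregate of its neighbours' features. A stripped-down mean-aggregator sketch, omitting the learned weight matrix, nonlinearity, and normalisation of the real method:

```python
def graphsage_mean_step(features, neighbours, node):
    """One mean-aggregation step in the spirit of GraphSAGE: the node's
    new embedding is its own feature vector concatenated with the mean
    of its neighbours' feature vectors. In the full method this
    concatenation is then multiplied by a learned weight matrix and
    passed through a nonlinearity."""
    nbrs = neighbours[node]
    dim = len(features[node])
    mean = [sum(features[n][d] for n in nbrs) / len(nbrs) for d in range(dim)]
    return features[node] + mean  # list concatenation: [own | mean-of-nbrs]
```

Run over ConceptNet with distributional word vectors as initial features, repeated steps like this mix relational knowledge (the graph edges) into the distributional semantics, which is the combination the title describes.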

T.E. Koenen
Master programme: Artificial Intelligence November 22nd, 2018
Institute: VU / Other Research group: Knowledge Representation and Reasoning Graduation thesis Supervisor: Peter Bloem
Text Generation and Annotation with Joint Multimodal Variational Autoencoders
Joint multimodal variational autoencoders are applied here to annotated text data to create a generative model over the two separate domains. By encoding mono-modal data and decoding the latent variable with the joint decoder, the model also allows for cross-domain mapping.
Scientific abstract (pdf 1K)   Full text (pdf 1784K)

J. Köhler
Master programme: Artificial Intelligence November 21st, 2018
Institute: Other Research group: Max Planck Institute for Intelligent Systems / EI Graduation thesis Supervisor: Efstratios Gavves
Differentially private data release by optimal compression
In this thesis we study how compressing data sets optimally, according to a measure of utility, yields a mechanism that can hide individuals within the set while maintaining statistical usefulness for analysis tasks. Since optimal compression is inherently difficult, we study two tractable instances that allow analytic sampling and analysis. Both approaches are further evaluated for their usefulness in practical applications. Finally, we sketch how this work could be extended to weaker assumptions on the data set or the utility measure.
Scientific abstract (pdf 1K)   Full text (pdf 794K)


This page is maintained by thesis@science.uva.nl