Since their introduction in 2014, Generative Adversarial Networks (GANs) have been a hot topic in AI. Although the original GAN was designed to generate images, researchers have since moved beyond that, creating variants that can generate music, perform style transfer, and much more. One pitfall of GANs is that they are challenging to train, and many tricks have been suggested to improve training. In this thesis, we combine these techniques with a state-of-the-art GAN variant in an attempt to improve GAN performance. We implement the Variably Trained GAN (VT-GAN), which combines an Auxiliary Classifier GAN (AC-GAN) with a deep convolutional neural network architecture, adding Label Smoothing and a Minibatch Discrimination layer to improve and stabilize training and to generate realistic, high-quality images. The evaluation metric is the Inception Score, which quantifies the realism of generated images. Although VT-GAN takes longer to train due to the additional operations, for the same amount of training it scores approximately 3\% better than the AC-GAN on the Inception Score.
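One of the stabilization tricks named above, one-sided label smoothing, replaces the discriminator's "real" target of 1.0 with a softened value such as 0.9 so the discriminator does not become overconfident. A minimal sketch (the function name and the default smoothing factor are our own, not from the thesis):

```python
import numpy as np

def smooth_labels(batch_size, real=True, smoothing=0.1):
    """One-sided label smoothing for GAN discriminator targets.

    Real labels become 1 - smoothing (e.g. 0.9) instead of 1.0;
    fake labels stay at 0.0. Softening only the real side avoids
    rewarding the generator for producing obviously fake samples.
    """
    if real:
        return np.full(batch_size, 1.0 - smoothing)
    return np.zeros(batch_size)
```

In practice these targets simply replace the hard 0/1 labels fed to the discriminator's loss during training.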
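The Inception Score used for evaluation is commonly defined as exp(E_x[KL(p(y|x) || p(y))]), where p(y|x) are class probabilities from a pretrained classifier (typically Inception v3) and p(y) is their marginal over the generated set. A minimal NumPy sketch of the computation, given a precomputed matrix of class probabilities (the function name and `eps` constant are our own):

```python
import numpy as np

def inception_score(preds, eps=1e-12):
    """Compute the Inception Score from classifier outputs.

    preds: (N, C) array of per-image class probabilities,
           e.g. softmax outputs of Inception v3 on generated images.
    Returns exp of the mean KL divergence between each p(y|x)
    and the marginal p(y); higher means sharper, more diverse images.
    """
    p_y = preds.mean(axis=0, keepdims=True)          # marginal class distribution
    kl = np.sum(preds * (np.log(preds + eps) - np.log(p_y + eps)), axis=1)
    return float(np.exp(kl.mean()))
```

A uniform classifier output yields a score of 1 (the minimum), while confident, evenly spread predictions approach the number of classes (the maximum), which is why higher scores are read as "more realistic and diverse".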