Abdullah Akgül
Machine learning researcher with a strong publication record in reinforcement learning, deep learning, and probabilistic modeling. Focused on building machine learning systems that work beyond benchmarks, with experience developing open-source tools, solving real-world decision-making problems under uncertainty, and collaborating across disciplines. Quick to adapt to new domains.
Selected Publications
- ICLR: Bridging the performance-gap between target-free and target-based reinforcement learning
  T. Vincent, Y. Tripathi, T. Faust, A. Akgül, and 4 more authors
  In International Conference on Learning Representations, 2026
The use of target networks in deep reinforcement learning is a popular solution to mitigate the brittleness of semi-gradient approaches and stabilize learning. However, target networks notoriously require additional memory and delay the propagation of Bellman updates compared to an ideal target-free approach. In this work, we step out of the binary choice between target-free and target-based algorithms. We introduce a new method that uses a copy of the last linear layer of the online network as a target network, while sharing the remaining parameters with the up-to-date online network. This simple modification lets us keep the low memory footprint of target-free methods while leveraging the target-based literature. We find that combining our approach with the concept of iterated Q-learning, which consists of learning consecutive Bellman updates in parallel, helps improve the sample efficiency of target-free approaches. Our proposed method, iterated Shared Q-Learning (iS-QL), bridges the performance gap between target-free and target-based approaches across various problems while using a single Q-network, thus stepping towards resource-efficient reinforcement learning algorithms.
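The architectural idea, keeping a frozen copy of only the last linear layer while sharing the trunk with the online network, can be sketched in a few lines. The PyTorch snippet below is a minimal illustration under my own assumptions about layer sizes and update schedule; it is not the authors' implementation, and the class and method names are hypothetical.

```python
import copy
import torch
import torch.nn as nn

class SharedTargetQNetwork(nn.Module):
    """Illustrative sketch: only the last linear layer is duplicated as a
    frozen target head; the trunk is shared with the online network."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, n_actions)      # online output layer
        self.target_head = copy.deepcopy(self.head)   # frozen copy of the head only
        for p in self.target_head.parameters():
            p.requires_grad_(False)

    def q_online(self, obs: torch.Tensor) -> torch.Tensor:
        return self.head(self.trunk(obs))

    def q_target(self, obs: torch.Tensor) -> torch.Tensor:
        # Bellman targets reuse the up-to-date trunk, so the memory overhead
        # is a single linear layer instead of a full network copy.
        with torch.no_grad():
            return self.target_head(self.trunk(obs))

    def refresh_target(self) -> None:
        # Periodically resync the frozen head with the online head.
        self.target_head.load_state_dict(self.head.state_dict())
```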
@inproceedings{vincent2026bridging,
  title     = {Bridging the performance-gap between target-free and target-based reinforcement learning},
  author    = {Vincent, T. and Tripathi, Y. and Faust, T. and Akg{\"u}l, A. and Oren, Y. and Kandemir, M. and Peters, P. and D'Eramo, C.},
  booktitle = {International Conference on Learning Representations},
  year      = {2026},
  url       = {https://openreview.net/pdf?id=ltcxS7JE0c},
}

- NeurIPS: Deterministic Uncertainty Propagation for Improved Model-Based Offline Reinforcement Learning
  A. Akgül, M. Haußmann, and M. Kandemir
  In Neural Information Processing Systems, 2024
Current approaches to model-based offline reinforcement learning often incorporate uncertainty-based reward penalization to address the distributional shift problem. These approaches, commonly known as pessimistic value iteration, use Monte Carlo sampling to estimate the Bellman target for temporal-difference-based policy evaluation. We find that the randomness caused by this sampling step significantly delays convergence. We present a theoretical result demonstrating the strong dependency of suboptimality on the number of Monte Carlo samples taken per Bellman target calculation. Our main contribution is a deterministic approximation to the Bellman target that uses progressive moment matching, a method originally developed for deterministic variational inference. The resulting algorithm, which we call Moment Matching Offline Model-Based Policy Optimization (MOMBO), propagates the uncertainty of the next state through a nonlinear Q-network in a deterministic fashion by approximating the distributions of hidden-layer activations with a normal distribution. We show that it is possible to provide tighter suboptimality guarantees for MOMBO than for existing Monte Carlo sampling approaches. We also observe MOMBO to converge faster than these approaches on a large set of benchmark tasks.
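The central building block, propagating a Gaussian belief through a linear layer and a ReLU by matching first and second moments, admits standard closed-form expressions. The snippet below is a minimal sketch of that step under a diagonal Gaussian assumption; the function names and shapes are mine and do not come from the paper's code.

```python
import math
import torch

def linear_moments(mean, var, weight, bias):
    """Propagate a diagonal Gaussian through y = x W^T + b,
    treating input dimensions as independent."""
    out_mean = mean @ weight.t() + bias
    out_var = var @ (weight ** 2).t()
    return out_mean, out_var

def relu_moments(mean, var, eps=1e-8):
    """Closed-form mean and variance of ReLU(x) for x ~ N(mean, var)."""
    std = torch.sqrt(var + eps)
    alpha = mean / std
    cdf = 0.5 * (1.0 + torch.erf(alpha / math.sqrt(2.0)))          # P(x > 0)
    pdf = torch.exp(-0.5 * alpha ** 2) / math.sqrt(2.0 * math.pi)  # standard normal density
    new_mean = mean * cdf + std * pdf
    second_moment = (mean ** 2 + var) * cdf + mean * std * pdf
    new_var = torch.clamp(second_moment - new_mean ** 2, min=0.0)
    return new_mean, new_var
```

Chaining such layer-wise updates yields a deterministic Gaussian approximation of the Q-value distribution, which is what replaces Monte Carlo sampling of the Bellman target.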
@inproceedings{akgul2024deterministic,
  title     = {Deterministic Uncertainty Propagation for Improved Model-Based Offline Reinforcement Learning},
  author    = {Akg{\"u}l, A. and Hau{\ss}mann, M. and Kandemir, M.},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  url       = {https://proceedings.neurips.cc/paper_files/paper/2024/file/82240d93542b74d0c4fdffca39cb779f-Paper-Conference.pdf},
}

- ICLR: Evidential Turing Processes
  M. Kandemir, A. Akgül, M. Haußmann, and G. Unal
  In International Conference on Learning Representations, 2022
A probabilistic classifier with reliable predictive uncertainties (i) fits the target domain data successfully, (ii) provides calibrated class probabilities in difficult regions of the target domain (e.g., class overlap), and (iii) accurately identifies queries coming from outside the target domain and rejects them. We introduce an original combination of Evidential Deep Learning, Neural Processes, and Neural Turing Machines that provides all three essential properties for total uncertainty quantification. On three image classification benchmarks, we observe our method to consistently improve in-domain uncertainty quantification, out-of-domain detection, and robustness against input perturbations with a single model. Our unified solution delivers an implementation-friendly and computationally efficient recipe for safety clearance and provides intellectual economy to the investigation of the algorithmic roots of epistemic awareness in deep neural nets.
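Of the three ingredients, the evidential output layer is the easiest to illustrate in isolation: the classifier emits non-negative per-class evidence that parameterizes a Dirichlet distribution, from which both class probabilities and a vacuity-style uncertainty follow. The sketch below shows only that piece, in generic evidential deep learning form; the Neural Process and memory components of the paper are omitted, and all names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Maps features to non-negative class evidence; the Dirichlet
    concentration alpha = evidence + 1 yields expected class probabilities
    and an epistemic uncertainty signal."""

    def __init__(self, feat_dim: int, n_classes: int):
        super().__init__()
        self.fc = nn.Linear(feat_dim, n_classes)

    def forward(self, features: torch.Tensor):
        evidence = F.softplus(self.fc(features))         # non-negative evidence per class
        alpha = evidence + 1.0                           # Dirichlet concentration
        probs = alpha / alpha.sum(-1, keepdim=True)      # expected class probabilities
        uncertainty = alpha.shape[-1] / alpha.sum(-1)    # vacuity: high when evidence is low
        return probs, uncertainty
```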
@inproceedings{kandemir2022evidential,
  title     = {Evidential Turing Processes},
  author    = {Kandemir, M. and Akg{\"u}l, A. and Hau{\ss}mann, M. and Unal, G.},
  booktitle = {International Conference on Learning Representations},
  year      = {2022},
  url       = {https://openreview.net/pdf?id=84NMXTHYe-},
}

- FL-NeurIPS: How to Combine Variational Bayesian Networks in Federated Learning
  A. Ozer, K.B. Buldu, A. Akgül, and G. Unal
  In Workshop on Federated Learning: Recent Advances and New Challenges (in Conjunction with NeurIPS 2022), 2022
Federated Learning enables multiple data centers to train a central model collaboratively without exposing any confidential data. Even though deterministic models can achieve high prediction accuracy, their lack of calibration and inability to quantify uncertainty are problematic for safety-critical applications. In contrast, probabilistic models such as Bayesian neural networks are relatively well-calibrated and able to quantify uncertainty alongside competitive prediction accuracy. Both approaches appear in the federated learning framework; however, the aggregation scheme of deterministic models cannot be directly applied to probabilistic models, since weights correspond to distributions instead of point estimates. In this work, we study the effects of various aggregation schemes for variational Bayesian neural networks. With empirical results on three image classification datasets, we observe that the degree of spread of the aggregated distribution is a significant factor in the learning process. We therefore present a survey of how to combine variational Bayesian networks in federated learning, along with computer vision classification benchmarks for different aggregation settings.
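For a mean-field Gaussian posterior, an aggregation scheme has to decide how client means and variances are combined, which directly controls the spread of the aggregated distribution. The sketch below contrasts two generic options, parameter-wise averaging and a precision-weighted product of Gaussians; it illustrates the design space and is not a reproduction of the specific schemes benchmarked in the paper.

```python
import torch

def average_gaussians(means, variances):
    """Naive scheme: average client means and variances parameter-wise.
    means, variances: tensors of shape (n_clients, n_params)."""
    return means.mean(0), variances.mean(0)

def multiply_gaussians(means, variances, eps=1e-8):
    """Product-of-experts scheme: precision-weighted combination, which
    shrinks the spread of the aggregated posterior."""
    precisions = 1.0 / (variances + eps)
    agg_var = 1.0 / precisions.sum(0)
    agg_mean = agg_var * (precisions * means).sum(0)
    return agg_mean, agg_var
```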
@inproceedings{ozer2022fl,
  title     = {How to Combine Variational Bayesian Networks in Federated Learning},
  author    = {Ozer, A. and Buldu, K.B. and Akg{\"u}l, A. and Unal, G.},
  booktitle = {Workshop on Federated Learning: Recent Advances and New Challenges (in Conjunction with NeurIPS 2022)},
  year      = {2022},
  url       = {https://openreview.net/forum?id=AkPwb9dvAlP},
}

- L4DC: Continual Learning of Multi-modal Dynamics with External Memory
  A. Akgül, G. Unal, and M. Kandemir
  In Proceedings of The 6th Annual Learning for Dynamics and Control Conference, 2024
We study the problem of fitting a model to a dynamical environment when new modes of behavior emerge sequentially. The learning model is aware when a new mode appears, but it does not have access to the true modes of individual training sequences. We devise a novel continual learning method that maintains a descriptor of the mode of an encountered sequence in a neural episodic memory. We employ a Dirichlet Process prior on the attention weights of the memory to foster efficient storage of the mode descriptors. Our method transfers knowledge across tasks by retrieving the descriptors of past modes similar to the mode of the current sequence and feeding the retrieved descriptor into its transition kernel as control input. We observe the continual learning performance of our method to compare favorably to the mainstream parameter-transfer approach.
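The retrieval step can be pictured as a soft attention read over the stored mode descriptors, with the retrieved descriptor acting as a control input to the transition kernel. The snippet below is a bare-bones sketch of such a read; the Dirichlet Process prior on the attention weights and the episodic write logic are omitted, and the tensor shapes are assumptions.

```python
import torch
import torch.nn.functional as F

def read_mode_descriptor(query: torch.Tensor,
                         memory_keys: torch.Tensor,
                         memory_values: torch.Tensor,
                         temperature: float = 1.0) -> torch.Tensor:
    """Soft attention read over stored mode descriptors.

    query: (d,) encoding of the current sequence
    memory_keys: (M, d) keys of previously stored modes
    memory_values: (M, k) stored mode descriptors
    """
    scores = memory_keys @ query / temperature
    weights = F.softmax(scores, dim=0)   # the paper additionally regularizes these
                                         # weights with a Dirichlet Process prior
    return weights @ memory_values       # retrieved descriptor, used as control
                                         # input to the transition kernel
```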
@inproceedings{akgul2024cddp,
  title     = {Continual Learning of Multi-modal Dynamics with External Memory},
  author    = {Akg{\"u}l, A. and Unal, G. and Kandemir, M.},
  booktitle = {Proceedings of The 6th Annual Learning for Dynamics and Control Conference},
  year      = {2024},
  url       = {https://arxiv.org/abs/2203.00936},
}

- TMLR: Overcoming Non-stationary Dynamics with Evidential Proximal Policy Optimization
  A. Akgül, G. Baykal, M. Haußmann, and M. Kandemir
  Transactions on Machine Learning Research, 2025
Continuous control of non-stationary environments is a major challenge for deep reinforcement learning algorithms. The time-dependency of the state transition dynamics aggravates the notorious stability problems of model-free deep actor-critic architectures. We posit that two properties will play a key role in overcoming non-stationarity in transition dynamics: (i) preserving the plasticity of the critic network and (ii) directed exploration for rapid adaptation to changing dynamics. We show that performing on-policy reinforcement learning with an evidential critic provides both. The evidential design ensures a fast and accurate approximation of the uncertainty around the state value, which maintains the plasticity of the critic network by detecting the distributional shifts caused by changes in dynamics. The probabilistic critic also makes the actor training objective a random variable, enabling the use of directed exploration approaches as a by-product. We name the resulting algorithm Evidential Proximal Policy Optimization (EPPO) due to the integral role of evidential uncertainty quantification in both policy evaluation and policy improvement stages. Through experiments on non-stationary continuous control tasks, where the environment dynamics change at regular intervals, we demonstrate that our algorithm outperforms state-of-the-art on-policy reinforcement learning variants in both task-specific and overall return.
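One way to picture an evidential critic is a value head that outputs the parameters of a Normal-Inverse-Gamma distribution over the state value, so that every value estimate comes with an epistemic variance that can flag distributional shift and drive directed exploration. The sketch below follows the generic deep evidential regression parameterization; whether EPPO uses exactly this head is an assumption, and the names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialValueHead(nn.Module):
    """Predicts Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta)
    over the state value, yielding a value estimate together with an
    epistemic variance."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.fc = nn.Linear(feat_dim, 4)

    def forward(self, features: torch.Tensor):
        gamma, log_nu, log_alpha, log_beta = self.fc(features).unbind(-1)
        nu = F.softplus(log_nu)
        alpha = F.softplus(log_alpha) + 1.0   # alpha > 1 keeps the variance finite
        beta = F.softplus(log_beta)
        value = gamma                                  # predicted state value
        epistemic_var = beta / (nu * (alpha - 1.0))    # uncertainty around the value estimate
        return value, epistemic_var
```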
@article{akgul2025overcoming,
  title   = {Overcoming Non-stationary Dynamics with Evidential Proximal Policy Optimization},
  author  = {Akg{\"u}l, A. and Baykal, G. and Hau{\ss}mann, M. and Kandemir, M.},
  journal = {Transactions on Machine Learning Research},
  year    = {2025},
  url     = {https://openreview.net/forum?id=KTfTwxsVNE},
}