Hadzi-velkov, Zoran
Preferred name
Hadzi-velkov, Zoran
Official Name
Hadzi-velkov, Zoran
Main Affiliation
Email
zoranhv@feit.ukim.edu.mk
4 results
Item type: Publication, Wireless powered communication networks with imperfect channel state information and non-ideal circuit power consumption (Journal of Electrical Engineering and Information Technologies, FEEIT, UKIM, 2018-12)
Item type: Publication, Gradient Descent Methods for Regularized Optimization (Macedonian Academy of Sciences and Arts, 2024); Nikolovski, Filip. Regularization is a widely recognized technique in mathematical optimization. It can be used to smooth out objective functions, refine the feasible solution set, or prevent overfitting in machine learning models. Due to its simplicity and robustness, the gradient descent (GD) method is one of the primary methods for the numerical optimization of differentiable objective functions. However, GD is not well suited to l1-regularized optimization problems, since these problems are non-differentiable at zero, causing iteration updates to oscillate or fail to converge. Instead, a more effective variant of GD, called proximal gradient descent, employs a technique known as soft-thresholding to shrink the iteration updates toward zero, thus enabling sparsity in the solution. Motivated by the widespread applications of proximal GD in sparse and low-rank recovery across various engineering disciplines, we provide an overview of the GD and proximal GD methods for solving regularized optimization problems. Furthermore, this paper proposes a novel algorithm for the proximal GD method that incorporates a variable step size. Unlike conventional proximal GD, which uses a fixed step size based on the global Lipschitz constant, our method estimates the Lipschitz constant locally at each iteration and uses its reciprocal as the step size. This eliminates the need for a global Lipschitz constant, which can be impractical to compute. Numerical experiments on synthetic and real data sets show a notable performance improvement of the proposed method over conventional proximal GD with a constant step size, both in the number of iterations and in time requirements.
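The abstract above describes proximal GD with soft-thresholding and a step size set from a locally estimated Lipschitz constant. The paper's exact estimation rule is not reproduced here; the following is a minimal Python sketch for the l1-regularized least-squares case, assuming the local estimate is obtained by a standard backtracking check, with the function names `soft_threshold` and `proximal_gd` chosen for illustration only:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1: shrinks each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gd(A, b, lam, n_iter=500, eta0=1.0):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient descent.

    Instead of a fixed step based on the global Lipschitz constant of the
    smooth part, the step eta is halved at each iteration until a local
    quadratic upper bound with constant 1/eta holds, i.e. the Lipschitz
    constant is estimated locally and its reciprocal used as the step size.
    """
    x = np.zeros(A.shape[1])
    f = lambda z: 0.5 * np.sum((A @ z - b) ** 2)
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)               # gradient of the smooth part
        eta = eta0
        while True:
            z = soft_threshold(x - eta * g, eta * lam)
            d = z - x
            # Local Lipschitz check: accept eta once 1/eta upper-bounds
            # the curvature of f along the step d.
            if f(z) <= f(x) + g @ d + (0.5 / eta) * (d @ d):
                break
            eta *= 0.5
        x = z
    return x
```

The backtracking loop always terminates, since the bound holds for any eta below the reciprocal of the global Lipschitz constant; in well-conditioned regions it accepts much larger steps, which is the source of the speedup the abstract reports.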
Item type: Publication, Joint Mode Selection and Power Control in Hybrid Bit-Semantic Communications (Institute of Electrical and Electronics Engineers (IEEE), 2025-12); Evgenidis, Nikos G.; Suraweera, Himal A.; Karagiannidis, George K.
