<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>R | Blessing S. Ofori-Atta</title><link>https://boforiatta.netlify.app/tag/r/</link><atom:link href="https://boforiatta.netlify.app/tag/r/index.xml" rel="self" type="application/rss+xml"/><description>R</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><copyright>© B. Ofori-Atta 2023</copyright><image><url>https://boforiatta.netlify.app/media/icon_hu9890c9d74d1fe7bac8a7cb904e315b4b_1566_512x512_fill_lanczos_center_2.png</url><title>R</title><link>https://boforiatta.netlify.app/tag/r/</link></image><item><title>DistGD</title><link>https://boforiatta.netlify.app/software/distgd/</link><pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate><guid>https://boforiatta.netlify.app/software/distgd/</guid><description>&lt;p>The goal of DistGD (Distributed Gradient Descent) is to efficiently optimize a global objective function expressed as a sum of local objective functions, each belonging to a different agent in a network, via a cluster architecture such as Spark. You supply a list of local objective functions, the weights of the connections between the agents, and a vector of initial values; DistGD takes care of the details and returns the optimal values. See &lt;a href="https://github.com/bosafoagyare/DistGD#installation" target="_blank" rel="noopener">the GitHub page&lt;/a> for installation instructions and a vignette.&lt;/p>
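&lt;p>As a rough base-R sketch of the idea (not DistGD's API; the objectives, weight matrix, and step size below are made up for illustration), each agent holds one local objective, mixes its iterate with its neighbors' via the connection weights, and then takes a local gradient step:&lt;/p>
&lt;pre>&lt;code class="language-r"># Illustrative sketch only: not the DistGD API.
# Three agents, each with a local objective f_i(x) = (x - a_i)^2;
# the global objective is their sum, minimized at mean(a) = 3.
grads &lt;- list(function(x) 2 * (x - 1),
              function(x) 2 * (x - 3),
              function(x) 2 * (x - 5))

# Doubly stochastic weights for the connections between the agents.
W &lt;- matrix(c(0.6, 0.2, 0.2,
              0.2, 0.6, 0.2,
              0.2, 0.2, 0.6), nrow = 3, byrow = TRUE)

x &lt;- c(0, 0, 0)            # vector of initial values, one per agent
for (k in 1:2000) {
  step &lt;- 0.1 / sqrt(k)    # diminishing step size
  g    &lt;- vapply(seq_along(grads), function(i) grads[[i]](x[i]), numeric(1))
  x    &lt;- as.vector(W %*% x) - step * g  # mix with neighbors, then descend
}
round(x, 2)  # all agents approach the global minimizer, 3
&lt;/code>&lt;/pre></description></item></channel></rss>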