<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>derivative-free methods Archives - fullSTEAMahead365</title>
	<atom:link href="https://fullsteamahead365.com/tag/derivative-free-methods/feed/" rel="self" type="application/rss+xml" />
	<link>https://fullsteamahead365.com/tag/derivative-free-methods/</link>
	<description>Human Advancement Never Stops</description>
	<lastBuildDate>Wed, 21 Aug 2019 21:30:21 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	

<image>
	<url>https://i0.wp.com/fullsteamahead365.com/wp-content/uploads/2019/01/cropped-cropped-icon.png?fit=32%2C32&#038;ssl=1</url>
	<title>derivative-free methods Archives - fullSTEAMahead365</title>
	<link>https://fullsteamahead365.com/tag/derivative-free-methods/</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">156403294</site>
	<item>
		<title>Foundations of Computational Mathematics &#8211; Random Gradient-Free Minimization of Convex Functions</title>
		<link>https://fullsteamahead365.com/2019/08/26/foundations-of-computational-mathematics-random-gradient-free-minimization-of-convex-functions/</link>
					<comments>https://fullsteamahead365.com/2019/08/26/foundations-of-computational-mathematics-random-gradient-free-minimization-of-convex-functions/#respond</comments>
		
		<dc:creator><![CDATA[Bill Loguidice]]></dc:creator>
		<pubDate>Mon, 26 Aug 2019 21:29:20 +0000</pubDate>
				<category><![CDATA[Engineering/Mathematics]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[complexity bounds]]></category>
		<category><![CDATA[convex optimization]]></category>
		<category><![CDATA[derivative-free methods]]></category>
		<category><![CDATA[random methods]]></category>
		<category><![CDATA[stochastic optimization]]></category>
		<guid isPermaLink="false">https://fullsteamahead365.com/?p=1504</guid>

					<description><![CDATA[<p>From the journal Foundations of Computational Mathematics comes a paper on Random Gradient-Free Minimization of Convex Functions. This paper is free to read (link) through September 2019.</p>
<p>The post <a href="https://fullsteamahead365.com/2019/08/26/foundations-of-computational-mathematics-random-gradient-free-minimization-of-convex-functions/">Foundations of Computational Mathematics &#8211; Random Gradient-Free Minimization of Convex Functions</a> appeared first on <a href="https://fullsteamahead365.com">fullSTEAMahead365</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>From the journal <em>Foundations of Computational Mathematics</em> comes a paper on <strong>Random Gradient-Free Minimization of Convex Functions</strong>. This paper is free to read (<a href="https://link.springer.com/article/10.1007/s10208-015-9296-2?sap-outbound-id=2CD06B7AA403BA98C16D249ED4695B3F132E3BBF&amp;utm_source=hybris-campaign&amp;utm_medium=email&amp;utm_campaign=000_KUND01_0000013904_SRMT_Centralized_10208&amp;utm_content=EN_internal_31000_20190821&amp;mkt-key=005056A5C6311ED999AA0A5933FFAAE7">link</a>) through September 2019.</p>



<h2 class="wp-block-heading">Abstract</h2>



<p>In this paper, we prove new complexity bounds for methods of convex optimization based only on computation of the function value. The search directions of our schemes are normally distributed random Gaussian vectors. It appears that such methods usually need at most <em>n</em> times more iterations than the standard gradient methods, where <em>n</em> is the dimension of the space of variables. This conclusion is true for both nonsmooth and smooth problems. For the latter class, we also present an accelerated scheme with the expected rate of convergence </p>



<div class="wp-block-image"><figure class="aligncenter"><img data-recalc-dims="1" decoding="async" width="72" height="45" src="https://i0.wp.com/fullsteamahead365.com/wp-content/uploads/2019/08/RGfMoCF-01.png?resize=72%2C45&#038;ssl=1" alt="" class="wp-image-1506"/></figure></div>



<p>, where <em>k</em> is the iteration counter. For stochastic optimization, we propose a zero-order scheme and justify its expected rate of convergence </p>



<div class="wp-block-image"><figure class="aligncenter"><img data-recalc-dims="1" decoding="async" width="69" height="48" src="https://i0.wp.com/fullsteamahead365.com/wp-content/uploads/2019/08/RGfMoCF-02.png?resize=69%2C48&#038;ssl=1" alt="" class="wp-image-1507"/></figure></div>



<p>. We also give some bounds on the rate of convergence of the random gradient-free methods to stationary points of nonconvex functions, for both smooth and nonsmooth cases. Our theoretical results are supported by preliminary computational experiments.</p>
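


<p>To make the mechanics concrete, here is a minimal sketch (our illustration, not code from the paper) of the basic zero-order step built on Gaussian smoothing: the gradient is estimated from two function values along a random Gaussian direction <em>u</em>. The test function and the parameters <code>mu</code> and <code>h</code> are illustrative choices, not values prescribed by the paper.</p>



<pre class="wp-block-code"><code>import numpy as np

rng = np.random.default_rng(0)

def gf_step(f, x, mu=1e-4, h=1e-2):
    """One step of a random gradient-free method (illustrative sketch)."""
    # Random Gaussian search direction u ~ N(0, I_n).
    u = rng.standard_normal(x.shape)
    # Zero-order gradient estimate from two function values; it
    # approximates the gradient of the Gaussian-smoothed function
    # f_mu(x) = E_u[ f(x + mu * u) ].
    g = (f(x + mu * u) - f(x)) / mu * u
    # Plain (non-accelerated) step; per the abstract, such schemes
    # need roughly n times more iterations than gradient methods.
    return x - h * g

# Toy usage: a smooth convex quadratic in n = 10 variables.
f = lambda x: 0.5 * (x @ x)
x = np.ones(10)
for _ in range(5000):
    x = gf_step(f, x)
print(f(x))  # close to 0
</code></pre>



<p>The accelerated and stochastic zero-order variants analyzed in the paper refine this basic step; the linked article gives the precise schemes and step-size rules.</p>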
<p>The post <a href="https://fullsteamahead365.com/2019/08/26/foundations-of-computational-mathematics-random-gradient-free-minimization-of-convex-functions/">Foundations of Computational Mathematics &#8211; Random Gradient-Free Minimization of Convex Functions</a> appeared first on <a href="https://fullsteamahead365.com">fullSTEAMahead365</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://fullsteamahead365.com/2019/08/26/foundations-of-computational-mathematics-random-gradient-free-minimization-of-convex-functions/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1504</post-id>
	</item>
	</channel>
</rss>
