<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>convex optimization Archives - fullSTEAMahead365</title>
	<atom:link href="https://fullsteamahead365.com/tag/convex-optimization/feed/" rel="self" type="application/rss+xml" />
	<link>https://fullsteamahead365.com/tag/convex-optimization/</link>
	<description>Human Advancement Never Stops</description>
	<lastBuildDate>Wed, 21 Aug 2019 21:30:21 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	

<image>
	<url>https://i0.wp.com/fullsteamahead365.com/wp-content/uploads/2019/01/cropped-cropped-icon.png?fit=32%2C32&#038;ssl=1</url>
	<title>convex optimization Archives - fullSTEAMahead365</title>
	<link>https://fullsteamahead365.com/tag/convex-optimization/</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">156403294</site>	<item>
		<title>Foundations of Computational Mathematics &#8211; Random Gradient-Free Minimization of Convex Functions</title>
		<link>https://fullsteamahead365.com/2019/08/26/foundations-of-computational-mathematics-random-gradient-free-minimization-of-convex-functions/</link>
					<comments>https://fullsteamahead365.com/2019/08/26/foundations-of-computational-mathematics-random-gradient-free-minimization-of-convex-functions/#respond</comments>
		
		<dc:creator><![CDATA[Bill Loguidice]]></dc:creator>
		<pubDate>Mon, 26 Aug 2019 21:29:20 +0000</pubDate>
				<category><![CDATA[Engineering/Mathematics]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[complexity bounds]]></category>
		<category><![CDATA[convex optimization]]></category>
		<category><![CDATA[derivative-free methods]]></category>
		<category><![CDATA[random methods]]></category>
		<category><![CDATA[stochastic optimization]]></category>
		<guid isPermaLink="false">https://fullsteamahead365.com/?p=1504</guid>

					<description><![CDATA[<p>From the journal Foundations of Computational Mathematics comes a paper on Random Gradient-Free Minimization of Convex Functions. This paper is free to read (link) through September 2019.</p>
<p>The post <a href="https://fullsteamahead365.com/2019/08/26/foundations-of-computational-mathematics-random-gradient-free-minimization-of-convex-functions/">Foundations of Computational Mathematics &#8211; Random Gradient-Free Minimization of Convex Functions</a> appeared first on <a href="https://fullsteamahead365.com">fullSTEAMahead365</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>From the journal <em>Foundations of Computational Mathematics</em> comes a paper on <strong>Random Gradient-Free Minimization of Convex Functions</strong>. This paper is free to read (<a href="https://link.springer.com/article/10.1007/s10208-015-9296-2?sap-outbound-id=2CD06B7AA403BA98C16D249ED4695B3F132E3BBF&amp;utm_source=hybris-campaign&amp;utm_medium=email&amp;utm_campaign=000_KUND01_0000013904_SRMT_Centralized_10208&amp;utm_content=EN_internal_31000_20190821&amp;mkt-key=005056A5C6311ED999AA0A5933FFAAE7">link</a>) through September 2019.</p>



<h2 class="wp-block-heading">Abstract</h2>



<p>In this paper, we prove new complexity bounds for methods of convex optimization based only on computation of the function value. The search directions of our schemes are normally distributed random Gaussian vectors. It appears that such methods usually need at most <em>n</em> times more iterations than the standard gradient methods, where <em>n</em> is the dimension of the space of variables. This conclusion is true for both nonsmooth and smooth problems. For the latter class, we also present an accelerated scheme with the expected rate of convergence</p>



<div class="wp-block-image"><figure class="aligncenter"><img data-recalc-dims="1" decoding="async" width="72" height="45" src="https://i0.wp.com/fullsteamahead365.com/wp-content/uploads/2019/08/RGfMoCF-01.png?resize=72%2C45&#038;ssl=1" alt="" class="wp-image-1506"/></figure></div>



<p>where <em>k</em> is the iteration counter. For stochastic optimization, we propose a zero-order scheme and justify its expected rate of convergence</p>



<div class="wp-block-image"><figure class="aligncenter"><img data-recalc-dims="1" decoding="async" width="69" height="48" src="https://i0.wp.com/fullsteamahead365.com/wp-content/uploads/2019/08/RGfMoCF-02.png?resize=69%2C48&#038;ssl=1" alt="" class="wp-image-1507"/></figure></div>



<p>We also give some bounds for the rate of convergence of the random gradient-free methods to stationary points of nonconvex functions, for both smooth and nonsmooth cases. Our theoretical results are supported by preliminary computational experiments.</p>
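<p>As a rough illustration of the scheme the abstract describes (a sketch, not code from the paper; all names and parameter values below are illustrative), each iteration samples a Gaussian direction, estimates the directional derivative from two function values, and takes a gradient-style step:</p>

```python
import numpy as np

def gradient_free_step(f, x, mu, step, rng):
    """One iteration: sample a Gaussian direction u, estimate the smoothed
    gradient from two function values, and take a gradient-style step."""
    u = rng.standard_normal(x.shape)
    g = (f(x + mu * u) - f(x)) / mu * u   # zero-order gradient estimate
    return x - step * g

def minimize(f, x0, mu=1e-6, step=0.1, iters=2000, seed=0):
    """Minimize f using only function evaluations (no gradients)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = gradient_free_step(f, x, mu, step, rng)
    return x

# Smooth convex test problem f(x) = ||x - c||^2 with minimizer c.
c = np.array([1.0, -2.0, 3.0])
f = lambda x: float(np.sum((x - c) ** 2))
x_star = minimize(f, np.zeros(3))
```

<p>Consistent with the abstract, the dimension enters the iteration count: each step probes a single random direction, so one pays roughly a factor of <em>n</em> relative to a true gradient method.</p>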
<p>The post <a href="https://fullsteamahead365.com/2019/08/26/foundations-of-computational-mathematics-random-gradient-free-minimization-of-convex-functions/">Foundations of Computational Mathematics &#8211; Random Gradient-Free Minimization of Convex Functions</a> appeared first on <a href="https://fullsteamahead365.com">fullSTEAMahead365</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://fullsteamahead365.com/2019/08/26/foundations-of-computational-mathematics-random-gradient-free-minimization-of-convex-functions/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1504</post-id>	</item>
		<item>
		<title>Foundations of Computational Mathematics &#8211; The Convex Geometry of Linear Inverse Problems</title>
		<link>https://fullsteamahead365.com/2019/08/23/foundations-of-computational-mathematics-the-convex-geometry-of-linear-inverse-problems/</link>
					<comments>https://fullsteamahead365.com/2019/08/23/foundations-of-computational-mathematics-the-convex-geometry-of-linear-inverse-problems/#respond</comments>
		
		<dc:creator><![CDATA[Bill Loguidice]]></dc:creator>
		<pubDate>Fri, 23 Aug 2019 21:12:32 +0000</pubDate>
				<category><![CDATA[Engineering/Mathematics]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[atomic norms]]></category>
		<category><![CDATA[convex optimization]]></category>
		<category><![CDATA[gaussian width]]></category>
		<category><![CDATA[real algebraic geometry]]></category>
		<category><![CDATA[semidefinite programming]]></category>
		<category><![CDATA[symmetry]]></category>
		<guid isPermaLink="false">https://fullsteamahead365.com/?p=1489</guid>

					<description><![CDATA[<p>From the journal Foundations of Computational Mathematics comes a paper on The Convex Geometry of Linear Inverse Problems. This paper is free to read (link) through September 2019.</p>
<p>The post <a href="https://fullsteamahead365.com/2019/08/23/foundations-of-computational-mathematics-the-convex-geometry-of-linear-inverse-problems/">Foundations of Computational Mathematics &#8211; The Convex Geometry of Linear Inverse Problems</a> appeared first on <a href="https://fullsteamahead365.com">fullSTEAMahead365</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>From the journal <em>Foundations of Computational Mathematics</em> comes a paper on <strong>The Convex Geometry of Linear Inverse Problems</strong>. This paper is free to read (<a href="https://link.springer.com/article/10.1007/s10208-012-9135-7?sap-outbound-id=2CD06B7AA403BA98C16D249ED4695B3F132E3BBF&amp;utm_source=hybris-campaign&amp;utm_medium=email&amp;utm_campaign=000_KUND01_0000013904_SRMT_Centralized_10208&amp;utm_content=EN_internal_31000_20190821&amp;mkt-key=005056A5C6311ED999AA0A5933FFAAE7">link</a>) through September 2019.</p>



<div class="wp-block-image"><figure class="aligncenter"><img data-recalc-dims="1" fetchpriority="high" decoding="async" width="640" height="396" src="https://i0.wp.com/fullsteamahead365.com/wp-content/uploads/2019/08/architecture-baroque-building-326784.jpg?resize=640%2C396&#038;ssl=1" alt="" class="wp-image-1492" srcset="https://i0.wp.com/fullsteamahead365.com/wp-content/uploads/2019/08/architecture-baroque-building-326784.jpg?resize=1024%2C634&amp;ssl=1 1024w, https://i0.wp.com/fullsteamahead365.com/wp-content/uploads/2019/08/architecture-baroque-building-326784.jpg?resize=300%2C186&amp;ssl=1 300w, https://i0.wp.com/fullsteamahead365.com/wp-content/uploads/2019/08/architecture-baroque-building-326784.jpg?resize=768%2C476&amp;ssl=1 768w, https://i0.wp.com/fullsteamahead365.com/wp-content/uploads/2019/08/architecture-baroque-building-326784.jpg?w=1280&amp;ssl=1 1280w" sizes="(max-width: 640px) 100vw, 640px" /></figure></div>



<h2 class="wp-block-heading">Abstract</h2>



<p>In applications throughout science and engineering one is often faced with the challenge of solving an ill-posed inverse problem, where the number of available measurements is smaller than the dimension of the model to be estimated. However in many practical situations of interest, models are constrained structurally so that they only have a few degrees of freedom relative to their ambient dimension. This paper provides a general framework to convert notions of simplicity into convex penalty functions, resulting in convex optimization solutions to linear, underdetermined inverse problems. The class of simple models considered includes those formed as the sum of a few atoms from some (possibly infinite) elementary atomic set; examples include well-studied cases from many technical fields such as sparse vectors (signal processing, statistics) and low-rank matrices (control, statistics), as well as several others including sums of a few permutation matrices (ranked elections, multiobject tracking), low-rank tensors (computer vision, neuroscience), orthogonal matrices (machine learning), and atomic measures (system identification). The convex programming formulation is based on minimizing the norm induced by the convex hull of the atomic set; this norm is referred to as the <em>atomic norm</em>. The facial structure of the atomic norm ball carries a number of favorable properties that are useful for recovering simple models, and an analysis of the underlying convex geometry provides sharp estimates of the number of generic measurements required for exact and robust recovery of models from partial information. These estimates are based on computing the Gaussian widths of tangent cones to the atomic norm ball. When the atomic set has algebraic structure the resulting optimization problems can be solved or approximated via semidefinite programming. 
The quality of these approximations affects the number of measurements required for recovery, and this tradeoff is characterized via some examples. Thus this work extends the catalog of simple models (beyond sparse vectors and low-rank matrices) that can be recovered from limited linear information via tractable convex programming. </p>
<p>The post <a href="https://fullsteamahead365.com/2019/08/23/foundations-of-computational-mathematics-the-convex-geometry-of-linear-inverse-problems/">Foundations of Computational Mathematics &#8211; The Convex Geometry of Linear Inverse Problems</a> appeared first on <a href="https://fullsteamahead365.com">fullSTEAMahead365</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://fullsteamahead365.com/2019/08/23/foundations-of-computational-mathematics-the-convex-geometry-of-linear-inverse-problems/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1489</post-id>	</item>
		<item>
		<title>Foundations of Computational Mathematics &#8211; Exact Matrix Completion via Convex Optimization</title>
		<link>https://fullsteamahead365.com/2019/08/22/foundations-of-computational-mathematics-exact-matrix-completion-via-convex-optimization/</link>
					<comments>https://fullsteamahead365.com/2019/08/22/foundations-of-computational-mathematics-exact-matrix-completion-via-convex-optimization/#respond</comments>
		
		<dc:creator><![CDATA[Bill Loguidice]]></dc:creator>
		<pubDate>Thu, 22 Aug 2019 20:58:21 +0000</pubDate>
				<category><![CDATA[Engineering/Mathematics]]></category>
		<category><![CDATA[Technology]]></category>
		<category><![CDATA[compressed sensing]]></category>
		<category><![CDATA[convex optimization]]></category>
		<category><![CDATA[decoupling]]></category>
		<category><![CDATA[duality in optimization]]></category>
		<category><![CDATA[low-rank matrices]]></category>
		<category><![CDATA[matrix completion]]></category>
		<category><![CDATA[noncommutative Khintchine inequality]]></category>
		<category><![CDATA[nuclear norm minimization]]></category>
		<category><![CDATA[random matrices]]></category>
		<guid isPermaLink="false">https://fullsteamahead365.com/?p=1487</guid>

					<description><![CDATA[<p>From the journal Foundations of Computational Mathematics comes a paper on Exact Matrix Completion via Convex Optimization. This paper is free to read (link) through September 2019.</p>
<p>The post <a href="https://fullsteamahead365.com/2019/08/22/foundations-of-computational-mathematics-exact-matrix-completion-via-convex-optimization/">Foundations of Computational Mathematics &#8211; Exact Matrix Completion via Convex Optimization</a> appeared first on <a href="https://fullsteamahead365.com">fullSTEAMahead365</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>From the journal <em>Foundations of Computational Mathematics</em> comes a paper on <strong>Exact Matrix Completion via Convex Optimization</strong>. This paper is free to read (<a href="https://link.springer.com/article/10.1007/s10208-009-9045-5?sap-outbound-id=2CD06B7AA403BA98C16D249ED4695B3F132E3BBF&amp;utm_source=hybris-campaign&amp;utm_medium=email&amp;utm_campaign=000_KUND01_0000013904_SRMT_Centralized_10208&amp;utm_content=EN_internal_31000_20190821&amp;mkt-key=005056A5C6311ED999AA0A5933FFAAE7">link</a>) through September 2019.</p>



<h2 class="wp-block-heading">Abstract</h2>



<p>We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe <em>m</em> entries selected uniformly at random from a matrix <em><strong>M</strong></em>. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number <em>m</em> of sampled entries obeys</p>



<div class="wp-block-image"><figure class="aligncenter"><img data-recalc-dims="1" loading="lazy" decoding="async" width="165" height="48" src="https://i0.wp.com/fullsteamahead365.com/wp-content/uploads/2019/08/EMCvCO.png?resize=165%2C48&#038;ssl=1" alt="" class="wp-image-1488" srcset="https://i0.wp.com/fullsteamahead365.com/wp-content/uploads/2019/08/EMCvCO.png?w=165&amp;ssl=1 165w, https://i0.wp.com/fullsteamahead365.com/wp-content/uploads/2019/08/EMCvCO.png?resize=160%2C48&amp;ssl=1 160w" sizes="auto, (max-width: 165px) 100vw, 165px" /></figure></div>



<p>for some positive numerical constant <em>C</em>, then with very high probability, most <em>n</em>×<em>n</em> matrices of rank <em>r</em> can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information. </p>
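<p>The convex program in the abstract (minimum nuclear norm subject to matching the observed entries) is typically solved via semidefinite programming or singular-value thresholding. The sketch below instead uses a simple nonconvex cousin, alternating projections with hard rank truncation, which behaves similarly on easy random instances; names and parameters are illustrative, not from the paper.</p>

```python
import numpy as np

def complete_matrix(M_obs, mask, rank, iters=500):
    """Alternate between (1) projecting onto rank-r matrices via a
    truncated SVD and (2) re-imposing the observed entries."""
    X = np.where(mask, M_obs, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        s[rank:] = 0.0                  # keep only the top-r singular values
        X = (U * s) @ Vt                # rank-r projection of X
        X[mask] = M_obs[mask]           # observed entries are hard constraints
    return X

# Rank-2, 30x30 matrix with roughly 60% of entries sampled uniformly at random.
rng = np.random.default_rng(0)
n, r = 30, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
mask = rng.random((n, n)) < 0.6
M_hat = complete_matrix(M, mask, rank=r)
```

<p>Here the number of observed entries is far above the sampling threshold the abstract states (up to the constant <em>C</em>), which is why the missing entries are filled in essentially exactly.</p>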
<p>The post <a href="https://fullsteamahead365.com/2019/08/22/foundations-of-computational-mathematics-exact-matrix-completion-via-convex-optimization/">Foundations of Computational Mathematics &#8211; Exact Matrix Completion via Convex Optimization</a> appeared first on <a href="https://fullsteamahead365.com">fullSTEAMahead365</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://fullsteamahead365.com/2019/08/22/foundations-of-computational-mathematics-exact-matrix-completion-via-convex-optimization/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1487</post-id>	</item>
	</channel>
</rss>
