Dataset schema (column, type, value statistics):

id                    large_string   lengths 9 to 16
submitter             large_string   lengths 1 to 64
authors               large_string   lengths 4 to 60.7k
title                 large_string   lengths 1 to 381
comments              large_string   lengths 1 to 827
journal-ref           large_string   lengths 1 to 557
doi                   large_string   lengths 8 to 153
report-no             large_string   lengths 2 to 509
categories            large_string   lengths 5 to 125
license               large_string   9 distinct values
abstract              large_string   lengths 6 to 5.67k
update_date           timestamp[ms]  dates 2007-05-23 00:00:00 to 2026-01-16 00:00:00
classification_label  string         2 distinct values
is_new_dataset        bool           2 classes
confidence_score      float64        0.5 to 0.72
classification_date   string         date 2026-01-25 00:43:33 (min = max)
model_version         string         1 distinct value
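The records that follow repeat these 17 fields, one value per line, in the order of the schema listed above. A minimal sketch of how one such flattened row maps onto named fields, assuming that column order holds (the `EXPECTED_COLUMNS` list and `as_record` helper are illustrative, not part of any dataset tooling; the sample row abridges the first record in this dump):

```python
# Column order as given in the schema listing (an assumption about how
# the flattened rows map onto columns).
EXPECTED_COLUMNS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref",
    "doi", "report-no", "categories", "license", "abstract",
    "update_date", "classification_label", "is_new_dataset",
    "confidence_score", "classification_date", "model_version",
]

def as_record(values):
    """Zip one flattened 17-value row into a dict keyed by column name."""
    if len(values) != len(EXPECTED_COLUMNS):
        raise ValueError(
            f"expected {len(EXPECTED_COLUMNS)} fields, got {len(values)}"
        )
    return dict(zip(EXPECTED_COLUMNS, values))

# First record of this dump, with long free-text fields abridged.
row = [
    "1704.01478", "Vladimir Belov", "D. Yu. Akimov et al.",
    "Test of SensL SiPM coated with NOL-1 wavelength shifter in liquid xenon",
    "8 pages, 4 figures", None, "10.1088/1748-0221/12/05/P05014", None,
    "physics.ins-det",
    "http://arxiv.org/licenses/nonexclusive-distrib/1.0/",
    "(abstract text)", "2017-05-31T00:00:00", "no_new_dataset", False,
    0.704292, "2026-01-25T00:43:33.318544",
    "davanstrien/ModernBERT-base-is-new-arxiv-dataset",
]
record = as_record(row)
```

Fields with no value (journal-ref, report-no here) appear as `null` in the raw rows; the sketch represents them as `None`.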
1704.01478
Vladimir Belov
D. Yu. Akimov, V. A. Belov, O. V. Borshchev, A. A. Burenkov, Yu. L. Grishkin, A. K. Karelin, A. V. Kuchenkov, A. N. Martemiyanov, S. A. Ponomarenko, G. E. Simakov, V. N. Stekhanov, N. M. Surin, V. S. Timoshin, O. Ya. Zeldovich
Test of SensL SiPM coated with NOL-1 wavelength shifter in liquid xenon
8 pages, 4 figures
null
10.1088/1748-0221/12/05/P05014
null
physics.ins-det
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A SensL MicroFC-SMT-60035 6x6 mm$^2$ silicon photo-multiplier coated with a NOL-1 wavelength shifter has been tested in liquid xenon to detect the 175-nm scintillation light. For comparison, a Hamamatsu vacuum-ultraviolet-sensitive MPPC VUV3 3x3 mm$^2$ was tested under the same conditions. Photodetection efficiencies of $13.1 \pm 2.5$% and $6.0 \pm 1.0$%, respectively, were obtained.
2017-05-31T00:00:00
no_new_dataset
false
0.704292
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1704.02637
Peter Ma
Peter C. Ma, Yu Lv, Matthias Ihme
Numerical methods to prevent pressure oscillations in transcritical flows
Annual Research Briefs 2016, Center for Turbulence Research, Stanford University
null
null
null
physics.flu-dyn physics.comp-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The accurate and robust simulation of transcritical real-fluid effects is crucial for many engineering applications, such as fuel injection in internal combustion engines, rocket engines and gas turbines. For example, in diesel engines, the liquid fuel is injected into the ambient gas at a pressure that exceeds its critical value, and the fuel jet will be heated to a supercritical temperature before combustion takes place. This process is often referred to as transcritical injection. The largest thermodynamic gradient in the transcritical regime occurs as the fluid undergoes a liquid-like to a gas-like transition when crossing the pseudo-boiling line (Yang 2000, Oschwald et al. 2006, Banuti 2015). The complex processes during transcritical injection are still not well understood. Therefore, to provide insights into high-pressure combustion systems, accurate and robust numerical simulation tools are required for the characterization of supercritical and transcritical flows.
2017-05-31T00:00:00
no_new_dataset
false
0.709875
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1704.04091
Chao Zuo
Chao Zuo, Jiasong Sun, Jiaji Li, Jialin Zhang, Anand Asundi, Qian Chen
High-resolution transport-of-intensity quantitative phase microscopy with annular illumination
This manuscript was originally submitted on 20 Feb. 2017
null
null
null
physics.optics physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For quantitative phase imaging (QPI) based on the transport-of-intensity equation (TIE), partially coherent illumination provides speckle-free imaging, compatibility with brightfield microscopy, and transverse resolution beyond the coherent diffraction limit. Unfortunately, in a conventional microscope with a circular illumination aperture, partial coherence tends to diminish the phase contrast, exacerbating the inherent noise-to-resolution tradeoff in TIE imaging, resulting in strong low-frequency artifacts and compromised imaging resolution. Here, we demonstrate how these issues can be effectively addressed by replacing the conventional circular illumination aperture with an annular one. The matched annular illumination not only strongly boosts the phase contrast for low spatial frequencies, but significantly improves the practical imaging resolution to near the incoherent diffraction limit. By incorporating high-numerical-aperture (NA) illumination as well as a high-NA objective, it is shown, for the first time, that TIE phase imaging can achieve a transverse resolution up to 208 nm, corresponding to an effective NA of 2.66. Time-lapse imaging of in vitro HeLa cells, revealing cellular morphology and subcellular dynamics during cell mitosis and apoptosis, is exemplified. Given its capability for high-resolution QPI as well as its compatibility with widely available brightfield microscopy hardware, the proposed approach is expected to be adopted by the wider biology and medicine community.
2017-05-31T00:00:00
no_new_dataset
false
0.713297
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1704.04613
Yang Mingkun
Xiang Bai, Mingkun Yang, Pengyuan Lyu, Yongchao Xu and Jiebo Luo
Integrating Scene Text and Visual Appearance for Fine-Grained Image Classification
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Text in natural images contains rich semantics that are often highly relevant to objects or scenes. In this paper, we focus on the problem of fully exploiting scene text for visual understanding. The main idea is to combine word representations and deep visual features into a globally trainable deep convolutional neural network. First, the recognized words are obtained by a scene text reading system. Then, we combine the word embedding of the recognized words and the deep visual features into a single representation, which is optimized by a convolutional neural network for fine-grained image classification. In our framework, an attention mechanism is adopted to reveal the relevance between each recognized word and the given image, which further enhances the recognition performance. We have performed experiments on two datasets: the Con-Text dataset and the Drink Bottle dataset, which were proposed for fine-grained classification of business places and drink bottles, respectively. The experimental results consistently demonstrate that the proposed method combining textual and visual cues significantly outperforms classification with only visual representations. Moreover, we have shown that the learned representation improves the retrieval performance on the drink bottle images by a large margin, making it potentially useful in product search.
2017-05-31T00:00:00
no_new_dataset
false
0.65683
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1704.04648
Gerd Christian Krizek
Gerd Christian Krizek
Einstein's 1935 papers: EPR=ER?
43 pages, typos corrected
null
null
null
physics.hist-ph quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In May of 1935, Einstein published with two co-authors the famous EPR paper about entangled particles, which questioned the completeness of Quantum Mechanics by means of a gedankenexperiment. Only one month later, he published a work that at first seems unconnected to the EPR paper, the so-called Einstein-Rosen paper, which presented a solution of the field equations for particles in the framework of general relativity. Both papers address the conception of completeness in a theory and, from a modern perspective, it is easy to believe that there is a connection between these topics. We question whether Einstein might have considered that a correlation between nonlocal features of Quantum Mechanics and the Einstein-Rosen bridge can be used to explain entanglement. We analyse this question by discussing the conceptions of "completeness," "atomistic structure of matter," and "quantum phenomena" employed in the two papers. We discuss the historical embedding of the two works and their context in modern research. Recent approaches are presented that formulate an EPR=ER principle and claim an equivalence of the basic principles of these two papers.
2017-05-31T00:00:00
no_new_dataset
false
0.710339
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1704.05825
Thomas K\"ollner
Thomas K\"ollner and Thomas Boeck and J\"org Schumacher
Thermal Rayleigh-Marangoni convection in a three-layer liquid-metal-battery model
null
Phys. Rev. E 95, 053114 (2017)
10.1103/PhysRevE.95.053114
null
physics.flu-dyn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The combined effects of buoyancy-driven Rayleigh-B\'{e}nard convection (RC) and surface tension-driven Marangoni convection (MC) are studied in a triple-layer configuration which serves as a simplified model for a liquid metal battery (LMB). The three-layer model consists of a liquid metal alloy cathode, a molten salt separation layer, and a liquid metal anode at the top. Convection is triggered by the temperature gradient between the hot electrolyte and the colder electrodes, which is a consequence of the release of resistive heat during operation. We present a linear stability analysis of the state of pure thermal conduction in combination with three-dimensional direct numerical simulations of the nonlinear turbulent evolution on the basis of a pseudospectral method. Five different modes of convection are identified in the configuration, which are partly coupled to each other: RC in the upper electrode, RC with internal heating in the molten salt layer, MC at both interfaces between molten salt and electrode, as well as anti-convection in the middle layer and lower electrode. The linear stability analysis confirms that the additional Marangoni effect in the present setup increases the growth rates of the linearly unstable modes, i.e. Marangoni and Rayleigh-B\'{e}nard instability act together in the molten salt layer. The critical Grashof and Marangoni numbers decrease with increasing middle layer thickness. The calculated thresholds for the onset of convection are found for realistic current densities of laboratory-sized LMBs. The global turbulent heat transfer follows scaling predictions for internally heated RC. The global turbulent momentum transfer is comparable with turbulent convection in the classical Rayleigh-B\'{e}nard case.
2017-05-31T00:00:00
no_new_dataset
false
0.709275
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1704.06512
Suyong Choi
Suyong Choi, Yunjun Kim, Youn Roh
Detection of Dark Photon Decaying into $e^+e^-$ using Cherenkov Radiation
null
null
null
null
physics.ins-det hep-ex
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In dark photon search experiments with electron beam-dumps, it is difficult to access the smaller dark photon lifetime region of phase space due to enormous backgrounds from low-energy particles emerging from the target. In order to reduce the background, a thick beam-dump target is usually necessary. We propose to detect the Cherenkov radiation in gas due to the ultra-relativistic electron and positron from dark photon decay. The secondary particles emerging from the beam dump have very little chance to produce such Cherenkov radiation in gas. Making use of the direction of the Cherenkov radiation, a low-background dark photon search with a thinner target is possible. This would allow one to access challenging regions of the dark photon parameter space with low-power electron beams and a low-cost experimental setup.
2017-05-31T00:00:00
no_new_dataset
false
0.709517
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1704.07153
Mariusz Puchalski
Mariusz Puchalski, Jacek Komasa and Krzysztof Pachucki
Relativistic corrections for the ground electronic state of molecular hydrogen
null
Phys. Rev. A 95, 052506 (2017)
10.1103/PhysRevA.95.052506
null
physics.chem-ph physics.atom-ph physics.comp-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We recalculate the leading relativistic corrections for the ground electronic state of the hydrogen molecule using a variational method with explicitly correlated functions which satisfy the interelectronic cusp condition. The new computational approach allowed control of the numerical precision, which reached about 8 significant digits. More importantly, the updated theoretical energies have become discrepant with the known experimental values, and we conclude that the as yet unknown relativistic recoil corrections might be larger than previously anticipated.
2017-05-31T00:00:00
no_new_dataset
false
0.709358
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1704.08626
Jason Hindes
Jason Hindes and Ira B. Schwartz
Epidemic Extinction Paths in Complex Networks
null
Phys. Rev. E 95, 052317 (2017)
10.1103/PhysRevE.95.052317
null
physics.soc-ph cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the extinction of long-lived epidemics on finite complex networks induced by intrinsic noise. Applying analytical techniques to the stochastic Susceptible-Infected-Susceptible model, we predict the distribution of large fluctuations, the most probable, or optimal path through a network that leads to a disease-free state from an endemic state, and the average extinction time in general configurations. Our predictions agree with Monte-Carlo simulations on several networks, including synthetic weighted and degree-distributed networks with degree correlations, and an empirical high school contact network. In addition, our approach quantifies characteristic scaling patterns for the optimal path and distribution of large fluctuations, both near and away from the epidemic threshold, in networks with heterogeneous eigenvector centrality and degree distributions.
2017-05-31T00:00:00
no_new_dataset
false
0.711104
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.00664
Ryutaro Tanno
Ryutaro Tanno, Daniel E. Worrall, Aurobrata Ghosh, Enrico Kaden, Stamatios N. Sotiropoulos, Antonio Criminisi, Daniel C. Alexander
Bayesian Image Quality Transfer with CNNs: Exploring Uncertainty in dMRI Super-Resolution
Accepted paper at MICCAI 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we investigate the value of uncertainty modeling in 3D super-resolution with convolutional neural networks (CNNs). Deep learning has shown success in a plethora of medical image transformation problems, such as super-resolution (SR) and image synthesis. However, the highly ill-posed nature of such problems results in inevitable ambiguity in the learning of networks. We propose to account for intrinsic uncertainty through a per-patch heteroscedastic noise model and for parameter uncertainty through approximate Bayesian inference in the form of variational dropout. We show that the combined benefits of both lead to state-of-the-art SR performance on diffusion MR brain images in terms of errors compared to ground truth. We further show that the reduced error scores produce tangible benefits in downstream tractography. In addition, the probabilistic nature of the methods naturally confers a mechanism to quantify uncertainty over the super-resolved output. We demonstrate through experiments on both healthy and pathological brains the potential utility of such an uncertainty measure in the risk assessment of the super-resolved images for subsequent clinical use.
2017-05-31T00:00:00
no_new_dataset
false
0.711017
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.01462
Naveen Mellempudi
Naveen Mellempudi, Abhisek Kundu, Dheevatsa Mudigere, Dipankar Das, Bharat Kaul, Pradeep Dubey
Ternary Neural Networks with Fine-Grained Quantization
null
null
null
null
cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel fine-grained quantization (FGQ) method to ternarize pre-trained full precision models, while also constraining activations to 8 and 4-bits. Using this method, we demonstrate a minimal loss in classification accuracy on state-of-the-art topologies without additional training. We provide an improved theoretical formulation that forms the basis for a higher quality solution using FGQ. Our method involves ternarizing the original weight tensor in groups of $N$ weights. Using $N=4$, we achieve Top-1 accuracy within $3.7\%$ and $4.2\%$ of the baseline full precision result for Resnet-101 and Resnet-50 respectively, while eliminating $75\%$ of all multiplications. These results enable a full 8/4-bit inference pipeline, with best-reported accuracy using ternary weights on ImageNet dataset, with a potential of $9\times$ improvement in performance. Also, for smaller networks like AlexNet, FGQ achieves state-of-the-art results. We further study the impact of group size on both performance and accuracy. With a group size of $N=64$, we eliminate $\approx99\%$ of the multiplications; however, this introduces a noticeable drop in accuracy, which necessitates fine tuning the parameters at lower precision. We address this by fine-tuning Resnet-50 with 8-bit activations and ternary weights at $N=64$, improving the Top-1 accuracy to within $4\%$ of the full precision result with $<30\%$ additional training overhead. Our final quantized model can run on a full 8-bit compute pipeline using 2-bit weights and has the potential of up to $15\times$ improvement in performance compared to baseline full-precision models.
2017-05-31T00:00:00
no_new_dataset
false
0.710705
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.05914
Stefano De Leo
Stefano De Leo, Manoel P. Ara\'ujo, Gabriel G. Maia
The oscillatory behavior of light in the composite Goos-Haenchen shift
12 pages, 4 figures
Phys. Rev. A 95, 053836 (2017)
10.1103/PhysRevA.95.053836
null
physics.optics
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For incidence in the critical region, the propagation of Gaussian lasers through triangular dielectric blocks is characterized by the joint action of angular deviations and lateral displacements. This mixed effect, known as the composite Goos-Haenchen shift, produces a lateral displacement dependent on the axial coordinate, recently confirmed by a weak measurement experiment. We discuss under which conditions this axial lateral displacement, which only exists for the composite Goos-Haenchen shift, presents an oscillatory behavior. This oscillation phenomenon shows a peculiar behavior of light at critical incidence and, if experimentally tested, could stimulate further theoretical studies and lead to interesting optical applications.
2017-05-31T00:00:00
no_new_dataset
false
0.713234
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.06839
Chaoyang Wang
Chaoyang Wang, Hamed Kiani Galoogahi, Chen-Hsuan Lin, and Simon Lucey
Deep-LK for Efficient Adaptive Object Tracking
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present a new approach for efficient regression-based object tracking which we refer to as Deep-LK. Our approach is closely related to the Generic Object Tracking Using Regression Networks (GOTURN) framework of Held et al. We make the following contributions. First, we demonstrate that there is a theoretical relationship between siamese regression networks like GOTURN and the classical Inverse-Compositional Lucas & Kanade (IC-LK) algorithm. Further, we demonstrate that, unlike GOTURN, IC-LK adapts its regressor to the appearance of the currently tracked frame. We argue that the poor performance of GOTURN on unseen objects and/or viewpoints can be attributed to this missing property. Second, we propose a novel framework for object tracking, which we refer to as Deep-LK, that is inspired by the IC-LK framework. Finally, we show impressive results demonstrating that Deep-LK substantially outperforms GOTURN. Additionally, we demonstrate tracking performance comparable to current state-of-the-art deep trackers whilst being an order of magnitude (i.e. 100 FPS) more computationally efficient.
2017-05-31T00:00:00
no_new_dataset
false
0.712148
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.07954
Alessandro Salandrino
Susobhan Das, Shima Fardad, Inki Kim, Junsuk Rho, Rongqing Hui, Alessandro Salandrino
Nanophotonic modal dichroism: mode-multiplexed modulators
null
Opt. Lett. 41, 4394-4397 (2016)
10.1364/OL.41.004394
null
physics.app-ph physics.optics
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As the diffraction limit is approached, device miniaturization to integrate more functionality per area becomes more and more challenging. Here we propose a novel strategy to increase the functionality per area by exploiting the modal properties of a waveguide system. With such an approach, the design of a mode-multiplexed nanophotonic modulator relying on the mode-selective absorption of a patterned indium tin oxide (ITO) layer is proposed. Full-wave simulations of a device operating at the telecom wavelength of 1550 nm show that two modes can be independently modulated, while maintaining performances in line with conventional single-mode ITO modulators reported in the recent literature. The proposed design principles can pave the way to a novel class of mode-multiplexed compact photonic devices able to effectively multiply the functionality per area in integrated photonic systems.
2017-05-31T00:00:00
no_new_dataset
false
0.711717
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.08144
Regine Schuppe
Peter Fulde
Wavefunctions for large electronic systems
9 pages, 2 figures
null
null
null
physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Wavefunctions for large electron numbers suffer from an exponential growth of the Hilbert space which is required for their description. In fact, as pointed out by W. Kohn, for electron numbers $N > N_0$, where $N_0 \approx 10^3$, they become meaningless (the exponential wall problem). Nevertheless, despite the enormous successes of density functional theory, one would also like to develop electronic structure calculations for large systems based on wavefunctions. This is possible if one defines the latter in Liouville space with a cumulant metric rather than in Hilbert space. The cluster expansion of the free energy of a classical monoatomic gas makes it plausible that cumulants are a proper tool for electronic structure calculations.
2017-05-31T00:00:00
no_new_dataset
false
0.709875
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.08226
Vladimir Burdyuzha
Vladimir Burdyuzha
The Dark Components of the Universe Are Slowly Clarified
34 pages, 0 figures
JETP 124 (2017) 358-368 pp
10.1134/S1063776117020029
null
physics.gen-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dark sector of the Universe is beginning to be clarified step by step. If the dark energy is vacuum energy, then 123 orders are exactly reduced by ordinary physical processes. For many years these unexplained orders were called a crisis in physics. There was indeed a "crisis" before the introduction of the holographic principle and entropic force in physics. The vacuum energy was spent for the organization of new microstates during the entire life of the Universe, but in the initial period of its evolution the vacuum energy (78 orders) was reduced more effectively by the vacuum condensates produced by phase transitions, because the Universe lost its high symmetry during its expansion. Important problems of physics and cosmology can be solved if the quarks, leptons, and gauge bosons are composite particles. The dark matter, partially or fully consisting of familon-type pseudo-Goldstone bosons with a mass of 10^{-5} - 10^{-3} eV, can be explained in the composite model. Three generations of elementary particles are absolutely necessary in this model. In addition, this model realizes three relativistic phase transitions in a medium of familons at different red shifts, forming a large-scale structure of dark matter that was "repeated" by baryons. We predict the detection of dark matter dynamics, the detection of familons as dark matter particles, and the development of spectroscopy for the dark medium due to the probable presence of dark atoms in it. Other viewpoints on the dark components of the Universe are also discussed briefly.
2017-05-31T00:00:00
no_new_dataset
false
0.711761
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.08286
Qibin Zhao Dr
Qibin Zhao, Masashi Sugiyama, Andrzej Cichocki
Learning Efficient Tensor Representations with Ring Structure Networks
arXiv admin note: substantial text overlap with arXiv:1606.05535
null
null
null
cs.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tensor train (TT) decomposition is a powerful representation for high-order tensors, which has been successfully applied to various machine learning tasks in recent years. However, since the tensor product is not commutative, permutation of data dimensions makes solutions and TT-ranks of TT decomposition inconsistent. To alleviate this problem, we propose a permutation-symmetric network structure by employing circular multilinear products over a sequence of low-order core tensors. This network structure can be graphically interpreted as a cyclic interconnection of tensors, and thus we call it tensor ring (TR) representation. We develop several efficient algorithms to learn TR representation with adaptive TR-ranks by employing low-rank approximations. Furthermore, mathematical properties are investigated, which enable us to perform basic operations in a computationally efficient way by using TR representations. Experimental results on synthetic signals and real-world datasets demonstrate that the proposed TR network is more expressive and consistently informative than existing TT networks.
2017-05-31T00:00:00
no_new_dataset
false
0.711066
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.08715
Harsh Beohar
Harsh Beohar and Sebastian K\"upper
On path-based coalgebras and weak notions of bisimulation
A long version (with proofs) of CALCO'17 paper
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
It is well known that the theory of coalgebras provides an abstract definition of behavioural equivalence that coincides with strong bisimulation across a wide variety of state-based systems. Unfortunately, the theory in the presence of so-called silent actions is not yet fully developed. In this paper, we give a coalgebraic characterisation of branching bisimulation in the context of labelled transition systems and fully probabilistic systems. It is shown that recording executions (up to a notion of stuttering), rather than the set of successor states, from a state is sufficient to characterise branching bisimulation in both cases.
2017-05-31T00:00:00
no_new_dataset
false
0.708741
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.09276
Yue Wang
Yue Wang and Yeye He
Synthesizing Mapping Relationships Using Table Corpus
The long version of a paper published at SIGMOD 2017
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mapping relationships, such as (country, country-code) or (company, stock-ticker), are versatile data assets for an array of applications in data cleaning and data integration like auto-correction and auto-join. However, today there are no good repositories of mapping tables that can enable these intelligent applications. Given a corpus of tables such as web tables or spreadsheet tables, we observe that values of these mappings often exist in pairs of columns in the same tables. Motivated by their broad applicability, we study the problem of synthesizing mapping relationships using a large table corpus. Our synthesis process leverages compatibility of tables based on co-occurrence statistics, as well as constraints such as functional dependency. Experimental results using web tables and enterprise spreadsheets suggest that the proposed approach can produce high-quality mappings.
2017-05-31T00:00:00
no_new_dataset
false
0.708769
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.09499
Scientific Information Service CERN
V. Baglin, P. Chiggiato, P. Cruikshank, M. Gallilee, C. Garion and R. Kersevan
Vacuum System
11 pages, chapter 12 in High-Luminosity Large Hadron Collider (HL-LHC) : Preliminary Design Report
CERN Yellow Report CERN 2015-005, pp.195-205
10.5170/CERN-2015-005.195
null
physics.acc-ph
http://creativecommons.org/licenses/by/4.0/
Chapter 12 in High-Luminosity Large Hadron Collider (HL-LHC) : Preliminary Design Report. The Large Hadron Collider (LHC) is one of the largest scientific instruments ever built. Since opening up a new energy frontier for exploration in 2010, it has gathered a global user community of about 7,000 scientists working in fundamental particle physics and the physics of hadronic matter at extreme temperature and density. To sustain and extend its discovery potential, the LHC will need a major upgrade in the 2020s. This will increase its luminosity (rate of collisions) by a factor of five beyond the original design value and the integrated luminosity (total collisions created) by a factor ten. The LHC is already a highly complex and exquisitely optimised machine so this upgrade must be carefully conceived and will require about ten years to implement. The new configuration, known as High Luminosity LHC (HL-LHC), will rely on a number of key innovations that push accelerator technology beyond its present limits. Among these are cutting-edge 11-12 tesla superconducting magnets, compact superconducting cavities for beam rotation with ultra-precise phase control, new technology and physical processes for beam collimation and 300 metre-long high-power superconducting links with negligible energy dissipation. The present document describes the technologies and components that will be used to realise the project and is intended to serve as the basis for the detailed engineering design of HL-LHC.
2017-05-31T00:00:00
no_new_dataset
false
0.710946
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.09501
Scientific Information Service CERN
E. Bravin, B. Dehning, R. Jones, T. Lefevre and H. Schmickler
Beam Instrumentation and Long-Range Beam-Beam Compensation
14 pages, chapter 13 in High-Luminosity Large Hadron Collider (HL-LHC) : Preliminary Design Report
CERN Yellow Report CERN 2015-005, pp. 207-220
10.5170/CERN-2015-005.207
null
physics.acc-ph
http://creativecommons.org/licenses/by/4.0/
Chapter 13 in High-Luminosity Large Hadron Collider (HL-LHC) : Preliminary Design Report. The Large Hadron Collider (LHC) is one of the largest scientific instruments ever built. Since opening up a new energy frontier for exploration in 2010, it has gathered a global user community of about 7,000 scientists working in fundamental particle physics and the physics of hadronic matter at extreme temperature and density. To sustain and extend its discovery potential, the LHC will need a major upgrade in the 2020s. This will increase its luminosity (rate of collisions) by a factor of five beyond the original design value and the integrated luminosity (total collisions created) by a factor ten. The LHC is already a highly complex and exquisitely optimised machine so this upgrade must be carefully conceived and will require about ten years to implement. The new configuration, known as High Luminosity LHC (HL-LHC), will rely on a number of key innovations that push accelerator technology beyond its present limits. Among these are cutting-edge 11-12 tesla superconducting magnets, compact superconducting cavities for beam rotation with ultra-precise phase control, new technology and physical processes for beam collimation and 300 metre-long high-power superconducting links with negligible energy dissipation. The present document describes the technologies and components that will be used to realise the project and is intended to serve as the basis for the detailed engineering design of HL-LHC.
2017-05-31T00:00:00
no_new_dataset
false
0.710534
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.09584
Scientific Information Service CERN
V. Malka
Plasma Wake Accelerators: Introduction and Historical Overview
28 pages, CAS - CERN Accelerator School: Plasma Wake Acceleration, CERN, Geneva, Switzerland, 23 - 29 Nov 2014
CERN Yellow Report CERN-2016-001, pp.1-28
10.5170/CERN-2016-001.1
null
physics.acc-ph
http://creativecommons.org/licenses/by/4.0/
Fundamental questions on the nature of matter and energy have found answers thanks to the use of particle accelerators. Societal applications, such as cancer treatment or cancer imaging, illustrate the impact of accelerators on our current life. Today, accelerators use metallic cavities that sustain electric fields with values limited to about 100 MV/m. Because of its ability to support extreme accelerating gradients, the plasma medium has recently been proposed for future cavity-like accelerating structures. This contribution highlights the tremendous evolution of plasma accelerators, driven by either laser or particle beams, which allow the production of high-quality particle beams with a degree of tunability and a set of parameters that make them very pertinent for many applications.
2017-05-31T00:00:00
no_new_dataset
false
0.709778
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.09696
Domingos Soares
Domingos S. L. Soares, Marcos C. D. Neves, Andre K. T. Assis
Arp's Indomitable Universe
9 pages, 4 figures, pp. 185-197 of the book "The Galileo of Palomar: Essays in Memory of Halton Arp" (Apeiron, Montreal, 2017)
null
null
null
physics.hist-ph astro-ph.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present some aspects of the work and personality of Halton Christian Arp (1927-2013).
2017-05-31T00:00:00
no_new_dataset
false
0.70782
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.09864
Haojin Yang
Haojin Yang, Martin Fritzsche, Christian Bartz, Christoph Meinel
BMXNet: An Open-Source Binary Neural Network Implementation Based on MXNet
4 pages
null
null
null
cs.LG cs.CV cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Binary Neural Networks (BNNs) can drastically reduce memory size and accesses by applying bit-wise operations instead of standard arithmetic operations. They can therefore significantly improve efficiency and lower energy consumption at runtime, which enables the application of state-of-the-art deep learning models on low-power devices. BMXNet is an open-source BNN library based on MXNet that supports both XNOR-Networks and Quantized Neural Networks. The developed BNN layers can be seamlessly applied with other standard library components and work in both GPU and CPU mode. BMXNet is maintained and developed by the multimedia research group at Hasso Plattner Institute and released under the Apache license. Extensive experiments validate the efficiency and effectiveness of our implementation. The BMXNet library, several sample projects, and a collection of pre-trained binary deep models are available for download at https://github.com/hpi-xnor
2017-05-31T00:00:00
no_new_dataset
false
0.7108
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.09899
Zeerak Butt
Zeerak Waseem, Thomas Davidson, Dana Warmsley, Ingmar Weber
Understanding Abuse: A Typology of Abusive Language Detection Subtasks
To appear in the proceedings of the 1st Workshop on Abusive Language Online. Please cite that version
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As the body of research on abusive language detection and analysis grows, there is a need for critical consideration of the relationships between different subtasks that have been grouped under this label. Based on work on hate speech, cyberbullying, and online abuse we propose a typology that captures central similarities and differences between subtasks and we discuss its implications for data annotation and feature construction. We emphasize the practical actions that can be taken by researchers to best approach their abusive language detection subtask of interest.
2017-05-31T00:00:00
no_new_dataset
false
0.709994
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.09929
Suraiya Jabin
Mudasir Ahmad Wani, Suraiya Jabin
A sneak into the Devil's Colony - Fake Profiles in Online Social Networks
31 pages, 8 figures
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online Social Networks (OSNs) play an important role for internet users in carrying out daily activities such as content sharing, news reading, posting messages, reviewing products, and discussing events. At the same time, various kinds of spammers are equally attracted to these OSNs. These cyber criminals, including sexual predators, online fraudsters, advertising campaigners, catfishes, and social bots, exploit the network of trust by various means, especially by creating fake profiles to spread their content and carry out scams. All these malicious identities are very harmful to both users and service providers. From the OSN service provider's point of view, fake profiles affect the overall reputation of the network in addition to causing a loss of bandwidth. Spotting these malicious users requires huge manpower as well as more sophisticated automated methods. In this paper, various types of OSN threat generators, such as compromised profiles, cloned profiles, and online bots (spam bots, social bots, like bots, and influential bots), have been classified. An attempt is made to present several categories of features that have been used to train classifiers to identify a fake profile. Different data crawling approaches, along with some existing data sources for fake profile detection, have been identified. A refresher on existing cyber laws to curb social-media-based cyber crimes, along with their limitations, is also presented.
2017-05-31T00:00:00
no_new_dataset
false
0.714977
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10129
Tobias Wenger
Tobias Wenger, Giovanni Viola, Jari Kinaret, Mikael Fogelstr\"om, and Philippe Tassin
High-sensitivity plasmonic refractive index sensing using graphene
This is an author-created, un-copyedited version of an article accepted for publication/published in 2DMaterials. IOP Publishing Ltd is not responsible for any errors or omissions in this version of the manuscript or any version derived from it. The Version of Record is available online at https://doi.org/10.1088/2053-1583/aa70ff
2D Materials, 4, 025103 (2017)
10.1088/2053-1583/aa70ff
null
cond-mat.mes-hall physics.optics
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We theoretically demonstrate a high-sensitivity, graphene-plasmon based refractive index sensor working in the mid-infrared at room temperature. The bulk figure of merit of our sensor reaches values above $10$, but the key aspect of our proposed plasmonic sensor is its surface sensitivity which we examine in detail. We have used realistic values regarding doping level and electron relaxation time, which is the limiting factor for the sensor performance. Our results show quantitatively the high performance of graphene-plasmon based refractive index sensors working in the mid-infrared.
2017-05-31T00:00:00
no_new_dataset
false
0.712398
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10134
Egor Malykh
Egor Malykh, Sergey Novoselov, Oleg Kudashev
On Residual CNN in text-dependent speaker verification task
Accepted for Specom 2017
null
null
null
cs.SD cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning approaches are still not very common in the speaker verification field. We investigate the possibility of using a deep residual convolutional neural network with spectrograms as input features in the text-dependent speaker verification task. Although we were not able to surpass the baseline system in quality, we achieved quite good results for such a new approach, obtaining a 5.23% EER on the RSR2015 evaluation part. Fusion of the baseline and proposed systems outperformed the best individual system by 18% relative.
2017-05-31T00:00:00
no_new_dataset
false
0.711432
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10182
Taiji Suzuki
Taiji Suzuki
Fast learning rate of deep learning via a kernel perspective
36 pages
null
null
null
math.ST cs.LG stat.ML stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a new theoretical framework to analyze the generalization error of deep learning, and derive a new fast learning rate for two representative algorithms: empirical risk minimization and Bayesian deep learning. The series of theoretical analyses of deep learning has revealed its high expressive power and universal approximation capability. Although these analyses are highly nonparametric, existing generalization error analyses have been developed mainly in a fixed-dimensional parametric model. To bridge this gap, we develop an infinite-dimensional model that is based on an integral form, as performed in the analysis of the universal approximation capability. This allows us to define a reproducing kernel Hilbert space corresponding to each layer. Our point of view is to treat the ordinary finite-dimensional deep neural network as a finite approximation of the infinite-dimensional one. The approximation error is evaluated by the degree of freedom of the reproducing kernel Hilbert space in each layer. To estimate a good finite-dimensional model, we consider both empirical risk minimization and Bayesian deep learning. We derive their generalization error bounds, and it is shown that a bias-variance trade-off appears in terms of the number of parameters of the finite-dimensional approximation. We show that the optimal width of the internal layers can be determined through the degree of freedom, and that the convergence rate can be faster than the $O(1/\sqrt{n})$ rate shown in existing studies.
2017-05-31T00:00:00
no_new_dataset
false
0.709278
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10286
Debashish Chowdhury
Colin D. Kinz-Thompson, Ajeet K. Sharma, Joachim Frank, Ruben L. Gonzalez, Jr., Debashish Chowdhury
Quantitative Connection Between Ensemble Thermodynamics and Single-Molecule Kinetics: A Case Study Using Cryogenic Electron Microscopy and Single-Molecule Fluorescence Resonance Energy Transfer Investigations of the Ribosome
43 pages, including 6 figures
Journal of Physical Chemistry B, vol. 119, 10888 (2015)
10.1021/jp5128805
null
physics.bio-ph cond-mat.stat-mech physics.chem-ph q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
At equilibrium, thermodynamic and kinetic information can be extracted from biomolecular energy landscapes by many techniques. However, while static, ensemble techniques yield thermodynamic data, often only dynamic, single-molecule techniques can yield the kinetic data that describes transition-state energy barriers. Here we present a generalized framework based upon dwell-time distributions that can be used to connect such static, ensemble techniques with dynamic, single-molecule techniques, and thus characterize energy landscapes to greater resolutions. We demonstrate the utility of this framework by applying it to cryogenic electron microscopy (cryo-EM) and single-molecule fluorescence resonance energy transfer (smFRET) studies of the bacterial ribosomal pre-translocation complex. Among other benefits, application of this framework to these data explains why two transient, intermediate conformations of the pre-translocation complex, which are observed in a cryo-EM study, may not be observed in several smFRET studies.
2017-05-31T00:00:00
no_new_dataset
false
0.70788
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10311
Abhay Shah
Junjie Bai, Abhay Shah and Xiaodong Wu
Optimal Multi-Object Segmentation with Novel Gradient Vector Flow Based Shape Priors
Paper in review
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Shape priors have been widely utilized in medical image segmentation to improve segmentation accuracy and robustness. A major way to encode such a prior shape model is to use a mesh representation, which is prone to causing self-intersection or mesh folding. Those problems require complex and expensive algorithms to mitigate. In this paper, we propose a novel shape prior directly embedded in the voxel grid space, based on gradient vector flows of a pre-segmentation. The flexible and powerful prior shape representation is ready to be extended to simultaneously segmenting multiple interacting objects with minimum separation distance constraint. The problem is formulated as a Markov random field problem whose exact solution can be efficiently computed with a single minimum s-t cut in an appropriately constructed graph. The proposed algorithm is validated on two multi-object segmentation applications: the brain tissue segmentation in MRI images, and the bladder/prostate segmentation in CT images. Both sets of experiments show superior or competitive performance of the proposed method to other state-of-the-art methods.
2017-05-31T00:00:00
no_new_dataset
false
0.710734
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10313
Alexander W. Winkler
Alexander W Winkler, Farbod Farshidian, Diego Pardo, Michael Neunert and Jonas Buchli
Fast Trajectory Optimization for Legged Robots using Vertex-based ZMP Constraints
currently under review for IEEE RA-L
null
null
null
cs.RO math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper combines the fast Zero-Moment-Point (ZMP) approaches that work well in practice with the broader range of capabilities of a Trajectory Optimization formulation, by optimizing over body motion, footholds and Center of Pressure simultaneously. We introduce a vertex-based representation of the support-area constraint, which can treat arbitrarily oriented point-, line-, and area-contacts uniformly. This generalization allows us to create motions such as quadrupedal walking, trotting, bounding, pacing, combinations and transitions between these, limping, bipedal walking and push-recovery, all with the same approach. This formulation constitutes a minimal representation of the physical laws (unilateral contact forces) and kinematic restrictions (range of motion) in legged locomotion, which allows us to generate various motions in less than a second. We demonstrate the feasibility of the generated motions on a real quadruped robot.
2017-05-31T00:00:00
no_new_dataset
false
0.709221
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10342
Thomas Lukasiewicz
Patrick Hohenecker and Thomas Lukasiewicz
Deep Learning for Ontology Reasoning
9 pages
null
null
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we present a novel approach to ontology reasoning that is based on deep learning rather than logic-based formal reasoning. To this end, we introduce a new model for statistical relational learning that is built upon deep recursive neural networks, and give experimental evidence that it can easily compete with, or even outperform, existing logic-based reasoners on the task of ontology reasoning. More precisely, we compared our implemented system with one of the best logic-based ontology reasoners at present, RDFox, on a number of large standard benchmark datasets, and found that our system attained high reasoning quality, while being up to two orders of magnitude faster.
2017-05-31T00:00:00
no_new_dataset
false
0.711578
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10368
Jose Eduardo Novoa Ilic
Jos\'e Novoa, Josu\'e Fredes and N\'estor Becerra Yoma
DNN-based uncertainty estimation for weighted DNN-HMM ASR
null
null
null
null
cs.SD cs.NE
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this paper, the uncertainty is defined as the mean square error between a given enhanced noisy observation vector and the corresponding clean one. Then, a DNN is trained using enhanced noisy observation vectors as input and the uncertainty as output with a training database. In testing, the DNN receives an enhanced noisy observation vector and delivers the estimated uncertainty. This uncertainty is employed in combination with a weighted DNN-HMM based speech recognition system and compared with an existing estimate of the noise-cancelling uncertainty variance based on an additive noise model. Experiments were carried out with the Aurora-4 task. Results with clean, multi-noise and multi-condition training are presented.
2017-05-31T00:00:00
no_new_dataset
false
0.709549
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10375
Bekir Sait Ciftler
Bekir Sait Ciftler and Adem Tuncer and Ismail Guvenc
Indoor UAV Navigation to a Rayleigh Fading Source Using Q-Learning
3 pages, 4 figures, in review for IEEE IoTJ
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unmanned aerial vehicles (UAVs) can be used to localize victims, deliver first aid, and maintain wireless connectivity to victims and first responders during search-and-rescue and public safety scenarios. In this letter, we consider the problem of navigating a UAV to a Rayleigh fading wireless signal source in an indoor environment, e.g., an Internet-of-Things (IoT) device such as a smartwatch or other wearable owned by the victim. The source is assumed to transmit RF signals, and a Q-learning algorithm is used to navigate the UAV to the vicinity of the source. Our results show that the time-averaging window and the exploration rate of the Q-learning algorithm can be optimized for the fastest navigation of the UAV to the IoT device. As a result, Q-learning achieves the best performance with smaller convergence time overall.
2017-05-31T00:00:00
no_new_dataset
false
0.712005
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10385
Minje Kim
Minje Kim
Collaborative Deep Learning for Speech Enhancement: A Run-Time Model Selection Method Using Autoencoders
null
Proc. of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp 76-80, March 2017
null
null
cs.SD cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that a Modular Neural Network (MNN) can combine various speech enhancement modules, each of which is a Deep Neural Network (DNN) specialized on a particular enhancement job. Differently from an ordinary ensemble technique that averages variations in models, the proposed MNN selects the best module for the unseen test signal to produce a greedy ensemble. We see this as Collaborative Deep Learning (CDL), because it can reuse various already-trained DNN models without any further refining. In the proposed MNN, selecting the best module during run time is challenging. To this end, we employ a speech AutoEncoder (AE) as an arbitrator, whose input and output are trained to be as similar as possible if its input is clean speech. Therefore, the AE can gauge the quality of the module-specific denoised result by seeing its AE reconstruction error, e.g. low error means that the module output is similar to clean speech. We propose an MNN structure with various modules that are specialized on dealing with a specific noise type, gender, and input Signal-to-Noise Ratio (SNR) value, and empirically prove that it almost always works better than an arbitrarily chosen DNN module and sometimes as good as an oracle result.
2017-05-31T00:00:00
no_new_dataset
false
0.708639
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10394
Giuliano Gadioli La Guardia
Pedro J. Miranda and Giuliano La Guardia
On a relational theory of biological systems: a natural model for complex biological behavior
null
null
null
null
nlin.AO physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we develop a natural (empirical) relational theory for describing and modeling complex biological phenomena. Our stepping stone is the assertion that function implies structure. The theory is built upon a graph-theoretic structure in which a diffusion model of information takes place, and where dynamics can be investigated in order to generate steady quantifiers. In this context, we improve a seminal work by adding a context-free measure of biological importance given by Shannon's entropy. We also introduce the concept of biological loci. This concept stands for closely related biological agents that together play a role as an agent in their own right. Our results allow us to synthesize a natural model for complex biological behavior that takes into account the system's update, irreducibility, and the exploitation of the dynamical behavior mounted over a diffusion model. The model's value ultimately rests on its natural capacity to describe plasticity and environmental changes, which have an intrinsic relationship with Shannon's entropy and the sort of dynamics that biological systems can display.
2017-05-31T00:00:00
no_new_dataset
false
0.70985
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10396
Zachary Friggstad
Sara Ahmadian and Zachary Friggstad
Further Approximations for Demand Matching: Matroid Constraints and Minor-Closed Graphs
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We pursue a study of the Generalized Demand Matching problem, a common generalization of the $b$-Matching and Knapsack problems. Here, we are given a graph with vertex capacities, edge profits, and asymmetric demands on the edges. The goal is to find a maximum-profit subset of edges so that the demands of chosen edges do not violate the vertex capacities. This problem is APX-hard and constant-factor approximations are known. Our results fall into two categories. First, using iterated relaxation and various filtering strategies, we show via an efficient rounding algorithm that, if an additional matroid structure $\mathcal M$ is given and we further only allow sets $F \subseteq E$ that are independent in $\mathcal M$, the natural LP relaxation has an integrality gap of at most $\frac{25}{3} \approx 8.333$. This can be improved in various special cases; for example, we improve over the 15-approximation for the previously-studied Coupled Placement problem [Korupolu et al. 2014] by giving a $7$-approximation. Using similar techniques, we show that the problem of computing a minimum-cost base in $\mathcal M$ satisfying vertex capacities admits a $(1,3)$-bicriteria approximation. This improves over the previous $(1,4)$-approximation in the special case that $\mathcal M$ is the graphic matroid over the given graph [Fukanaga and Nagamochi, 2009]. Second, we show that Demand Matching admits a polynomial-time approximation scheme in graphs that exclude a fixed minor. If all demands are polynomially-bounded integers, this is somewhat easy using dynamic programming in bounded-treewidth graphs. Our main technical contribution is a sparsification lemma allowing us to scale the demands to be used in a more intricate dynamic programming algorithm, followed by randomized rounding to filter our scaled-demand solution to a feasible solution.
2017-05-31T00:00:00
no_new_dataset
false
0.707546
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10404
Cun Mu
Cun Mu, Daniel Hsu, Donald Goldfarb
Successive Rank-One Approximations for Nearly Orthogonally Decomposable Symmetric Tensors
null
SIAM Journal on Matrix Analysis and Applications 36.4 (2015): 1638-1659
10.1137/15M1010890
null
cs.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many idealized problems in signal processing, machine learning and statistics can be reduced to the problem of finding the symmetric canonical decomposition of an underlying symmetric and orthogonally decomposable (SOD) tensor. Drawing inspiration from the matrix case, the successive rank-one approximations (SROA) scheme has been proposed and shown to yield this tensor decomposition exactly, and a plethora of numerical methods have thus been developed for the tensor rank-one approximation problem. In practice, however, the inevitable errors (say) from estimation, computation, and modeling necessitate that the input tensor can only be assumed to be a nearly SOD tensor---i.e., a symmetric tensor slightly perturbed from the underlying SOD tensor. This article shows that even in the presence of perturbation, SROA can still robustly recover the symmetric canonical decomposition of the underlying tensor. It is shown that when the perturbation error is small enough, the approximation errors do not accumulate with the iteration number. Numerical results are presented to support the theoretical findings.
2017-05-31T00:00:00
no_new_dataset
false
0.709825
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10405
Nicolas Le Roux
Cl\'ement Calauz\`enes and Nicolas Le Roux
Distributed SAGA: Maintaining linear convergence rate with limited communication
null
null
null
null
math.OC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, variance-reducing stochastic methods have shown great practical performance, exhibiting linear convergence rate when other stochastic methods offered a sub-linear rate. However, as datasets grow ever bigger and clusters become widespread, the need for fast distribution methods is pressing. We propose here a distribution scheme for SAGA which maintains a linear convergence rate, even when communication between nodes is limited.
2017-05-31T00:00:00
no_new_dataset
false
0.709982
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10407
Gang Wang
Gang Wang and Georgios B. Giannakis and Yousef Saad and Jie Chen
Solving Almost all Systems of Random Quadratic Equations
27 pages, 8 figures
null
null
null
math.OC cs.IT math.IT stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper deals with finding an $n$-dimensional solution $x$ to a system of quadratic equations of the form $y_i=|\langle{a}_i,x\rangle|^2$ for $1\le i \le m$, which is also known as phase retrieval and is NP-hard in general. We put forth a novel procedure for minimizing the amplitude-based least-squares empirical loss that starts with a weighted maximal correlation initialization obtainable with a few power or Lanczos iterations, followed by successive refinements based upon a sequence of iteratively reweighted (generalized) gradient iterations. The two stages (both the initialization and the gradient flow) distinguish themselves from prior contributions by the inclusion of a fresh (re)weighting regularization technique. The overall algorithm is conceptually simple, numerically scalable, and easy to implement. For certain random measurement models, the novel procedure is shown capable of finding the true solution $x$ in time proportional to reading the data $\{(a_i;y_i)\}_{1\le i \le m}$. This holds with high probability and without extra assumption on the signal $x$ to be recovered, provided that the number $m$ of equations is some constant $c>0$ times the number $n$ of unknowns in the signal vector, namely, $m>cn$. Empirically, the upshots of this contribution are: i) (almost) $100\%$ perfect signal recovery in the high-dimensional (say e.g., $n\ge 2,000$) regime given only an information-theoretic limit number of noiseless equations, namely, $m=2n-1$ in the real-valued Gaussian case; and, ii) (nearly) optimal statistical accuracy in the presence of additive noise of bounded support. Finally, substantial numerical tests using both synthetic data and real images corroborate markedly improved signal recovery performance and computational efficiency of our novel procedure relative to state-of-the-art approaches.
2017-05-31T00:00:00
no_new_dataset
false
0.709216
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10411
Thomas Chaigne
Thomas Chaigne, Bastien Arnal, Sergey Vilov, Emmanuel Bossy, Ori Katz
Super-resolution photoacoustic imaging via flow induced absorption fluctuations
null
null
null
null
physics.optics
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In deep tissue photoacoustic imaging the spatial resolution is inherently limited by the acoustic wavelength. We present an approach for surpassing the acoustic diffraction limit by exploiting temporal fluctuations in the sample absorption distribution, such as those induced by flowing particles. In addition to enhanced resolution, our approach inherently provides background reduction, and can be implemented with any conventional photoacoustic imaging system. The considerable resolution increase is made possible by adapting notions from super-resolution optical fluctuations imaging (SOFI) developed for blinking fluorescent molecules, to flowing acoustic emitters. By generalizing SOFI mathematical analysis to complex valued signals, we demonstrate super-resolved photoacoustic images that are free from oscillations caused by band-limited detection. The presented technique holds potential for contrast-agent free micro-vessels imaging, as red blood cells provide a strong endogenous source of naturally fluctuating absorption.
2017-05-31T00:00:00
no_new_dataset
false
0.712335
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10413
Evgeny Zamyatin I
Evgeny Zamyatin, Andrey Filchenkov
Learning to Generate Chairs with Generative Adversarial Nets
Submitted to NIPS 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generative adversarial networks (GANs) have gained tremendous popularity lately due to their ability to reinforce the quality of the predictive model with generated objects and the quality of the generative model with supervised feedback. GANs allow the synthesis of images with a high degree of realism. However, the learning process of such models is a very complicated optimization problem, and certain limitations of such models have been found. This affects the choice of certain layers and nonlinearities when designing architectures. In particular, it does not allow training convolutional GAN models with fully-connected hidden layers. In our work, we propose a modification of the previously described set of rules, as well as new approaches to designing architectures, that allow us to train more powerful GAN models. We show the effectiveness of our methods on the problem of synthesizing projections of 3D objects with the possibility of interpolation by class and viewpoint.
2017-05-31T00:00:00
no_new_dataset
false
0.710546
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10420
Basura Fernando
Basura Fernando and Stephen Gould
Discriminatively Learned Hierarchical Rank Pooling Networks
International Journal of Computer Vision
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we present novel temporal encoding methods for action and activity classification by extending the unsupervised rank pooling temporal encoding method in two ways. First, we present "discriminative rank pooling" in which the shared weights of our video representation and the parameters of the action classifiers are estimated jointly for a given training dataset of labelled vector sequences using a bilevel optimization formulation of the learning problem. When the frame level features vectors are obtained from a convolutional neural network (CNN), we rank pool the network activations and jointly estimate all parameters of the model, including CNN filters and fully-connected weights, in an end-to-end manner which we coined as "end-to-end trainable rank pooled CNN". Importantly, this model can make use of any existing convolutional neural network architecture (e.g., AlexNet or VGG) without modification or introduction of additional parameters. Then, we extend rank pooling to a high capacity video representation, called "hierarchical rank pooling". Hierarchical rank pooling consists of a network of rank pooling functions, which encode temporal semantics over arbitrary long video clips based on rich frame level features. By stacking non-linear feature functions and temporal sub-sequence encoders one on top of the other, we build a high capacity encoding network of the dynamic behaviour of the video. The resulting video representation is a fixed-length feature vector describing the entire video clip that can be used as input to standard machine learning classifiers. We demonstrate our approach on the task of action and activity recognition. Obtained results are comparable to state-of-the-art methods on three important activity recognition benchmarks with classification performance of 76.7% mAP on Hollywood2, 69.4% on HMDB51, and 93.6% on UCF101.
2017-05-31T00:00:00
no_new_dataset
false
0.712987
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
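The rank pooling idea summarized in the abstract above — encoding a temporally ordered sequence of frame features as the parameters of a linear function that respects frame order — is commonly approximated by regressing the frame index from the frame features. A minimal NumPy sketch of that approximation (the function name `rank_pool` and the ridge regularizer are illustrative choices, not taken from the paper):

```python
import numpy as np

def rank_pool(frames, reg=1.0):
    """Encode an ordered sequence of frame features as one vector.

    Fits ridge regression from each frame's feature vector to its
    temporal index; the learned weights capture the frame ordering
    and serve as a fixed-length video descriptor.
    """
    X = np.asarray(frames, dtype=float)            # (T, d) frame features
    t = np.arange(1, X.shape[0] + 1, dtype=float)  # temporal indices 1..T
    d = X.shape[1]
    # Closed-form ridge solution: (X^T X + reg * I)^{-1} X^T t
    w = np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ t)
    return w
```

The resulting fixed-length vector `w` can then feed a standard classifier, and — as in the hierarchical variant described above — pooling can be stacked over sub-sequences.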
1705.10421
David Moss
Marcello Ferrera, Yongwoo Park, Luca Razzari, Brent E. Little, Sai T. Chu, Roberto Morandotti, David J. Moss, and Jose Azana
First and second order all-optical integrating functions in a photonic integrated circuit
9 pages, 5 figures, 27 references
optics express volume 19, issue (23) pages 23153-23161 (2011)
10.1364/OE.19.023153
null
physics.optics
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We demonstrate all-optical temporal integration of arbitrary optical waveforms with temporal features as short as ~1.9ps. By using a four-port micro-ring resonator based on CMOS compatible doped glass technology we perform the 1st- and 2nd-order cumulative time integral of optical signals over a bandwidth that exceeds 400GHz. This device has applications for a wide range of ultra-fast data processing and pulse shaping functions as well as in the field of optical computing for the real-time analysis of differential equations.
2017-05-31T00:00:00
no_new_dataset
false
0.709621
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10423
David Moss
A. Pasquazi, M. Peccianti, M. Lamont, R. Morandotti, B. E. Little, S. Chu and D. J. Moss
Parametric gain and wavelength conversion via third order nonlinear optics in a CMOS compatible waveguide
8 pages, 4 figures, 30 references
Optics Express volume 18, issue (8) pages 7634-7641 (2010)
10.1364/OE.18.007634
null
physics.optics
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We demonstrate sub-picosecond wavelength conversion in the C-band via four wave mixing in a 45cm long high index doped silica spiral waveguide. We achieve an on/off conversion efficiency (signal to idler) of +16.5dB as well as a parametric gain of +15dB for a peak pump power of 38W over a wavelength range of 100nm. Furthermore, we demonstrated a minimum gain of +5dB over a wavelength range as large as 200nm.
2017-05-31T00:00:00
no_new_dataset
false
0.709549
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10432
Hamid Mirzaei Buini
Hamid Mirzaei, Tony Givargis
Fine-grained acceleration control for autonomous intersection management using deep reinforcement learning
Accepted in IEEE Smart World Congress 2017
null
null
null
cs.AI cs.RO cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in combining deep learning and Reinforcement Learning have shown a promising path for designing new control agents that can learn optimal policies for challenging control tasks. These new methods address the main limitations of conventional Reinforcement Learning methods such as customized feature engineering and small action/state space dimension requirements. In this paper, we leverage one of the state-of-the-art Reinforcement Learning methods, known as Trust Region Policy Optimization, to tackle intersection management for autonomous vehicles. We show that using this method, we can perform fine-grained acceleration control of autonomous vehicles in a grid street plan to achieve a global design objective.
2017-05-31T00:00:00
no_new_dataset
false
0.707934
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10437
Nikolaos Sahinidis
Nikolaos Ploskas, Christopher Laughman, Arvind U. Raghunathan, Nikolaos V. Sahinidis
Optimization of circuitry arrangements for heat exchangers using derivative-free optimization
null
null
10.1016/j.cherd.2017.05.015
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Optimization of the refrigerant circuitry can improve a heat exchanger's performance. Design engineers currently choose the refrigerant circuitry according to their experience and heat exchanger simulations. However, the design of an optimized refrigerant circuitry is difficult. The number of refrigerant circuitry candidates is enormous. Therefore, exhaustive search algorithms cannot be used and intelligent techniques must be developed to explore the solution space efficiently. In this paper, we formulate refrigerant circuitry design as a binary constrained optimization problem. We use CoilDesigner, a simulation and design tool of air to refrigerant heat exchangers, in order to simulate the performance of different refrigerant circuitry designs. We treat CoilDesigner as a black-box system since the exact relationship of the objective function with the decision variables is not explicit. Derivative-free optimization (DFO) algorithms are suitable for solving this black-box model since they do not require explicit functional representations of the objective function and the constraints. The aim of this paper is twofold. First, we compare four mixed-integer constrained DFO solvers and one box-bounded DFO solver and evaluate their ability to solve a difficult industrially relevant problem. Second, we demonstrate that the proposed formulation is suitable for optimizing the circuitry configuration of heat exchangers. We apply the DFO solvers to 17 heat exchanger design problems. Results show that TOMLAB/glcDirect and TOMLAB/glcSolve can find optimal or near-optimal refrigerant circuitry designs after a relatively small number of circuit simulations.
2017-05-31T00:00:00
no_new_dataset
false
0.708629
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10439
Oliver Knill
Oliver Knill
On a Dehn-Sommerville functional for simplicial complexes
24 pages, 10 figures
null
null
null
math.CO cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Assume G is a finite abstract simplicial complex with f-vector (v0, v1, ...) and generating function f(x) = sum_k v_(k-1) x^k = v0 x + v1 x^2 + v2 x^3 + ...; the Euler characteristic of G can be written as chi(G) = f(0) - f(-1). We study here the functional f1'(0) - f1'(-1), where f1' is the derivative of the generating function f1 of G1. The Barycentric refinement G1 of G is the Whitney complex of the finite simple graph whose vertices are the faces of G, with two faces connected if one is a subset of the other. Let L be the connection Laplacian of G, L = 1 + A, where A is the adjacency matrix of the connection graph G', which has the same vertex set as G1 but where two faces are connected if they intersect. We have f1'(0) = tr(L), and for the Green function g = L^(-1) also f1'(-1) = tr(g), so that eta1(G) = f1'(0) - f1'(-1) equals eta(G) = tr(L - L^(-1)). The established formula tr(g) = f1'(-1) for the generating function of G1 complements the determinant expression det(L) = det(g) = zeta(-1) for the Bowen-Lanford zeta function zeta(z) = 1/det(1 - z A) of the connection graph G' of G. We also establish a Gauss-Bonnet formula eta1(G) = sum_(x in V(G1)) chi(S(x)), where S(x) is the unit sphere of x, the graph generated by all vertices in G1 directly connected to x. Finally, we point out that the functional eta0(G) = sum_(x in V(G)) chi(S(x)) on graphs takes arbitrarily small and arbitrarily large values on every homotopy type of graphs.
2017-05-31T00:00:00
no_new_dataset
false
0.709959
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10443
Victor Silva
Victor do Nascimento Silva and Luiz Chaimowicz
MOBA: a New Arena for Game AI
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Games have always been popular testbeds for Artificial Intelligence (AI). In the last decade, we have seen the rise of Multiplayer Online Battle Arena (MOBA) games, which are among the most played games nowadays. In spite of this, there are few works that explore MOBA as a testbed for AI Research. In this paper we present and discuss the main features and opportunities offered by MOBA games to Game AI Research. We describe the various challenges faced along the game and also propose a discrete model that can be used to better understand and explore the game. With this, we aim to encourage the use of MOBA as a novel research platform for Game AI.
2017-05-31T00:00:00
no_new_dataset
false
0.713548
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10447
Jimmy Ren
Jimmy Ren, Zhiyang Yu, Jianbo Liu, Rui Zhang, Wenxiu Sun, Jiahao Pang, Xiaohao Chen, Qiong Yan
Robust Tracking Using Region Proposal Networks
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in visual tracking showed that deep Convolutional Neural Networks (CNN) trained for image classification can be strong feature extractors for discriminative trackers. However, due to the drastic difference between image classification and tracking, extra treatments such as model ensembles and feature engineering must be carried out to bridge the two domains. Such procedures are either time-consuming or hard to generalize well across datasets. In this paper we discovered that the internal structure of the Region Proposal Network's (RPN) top-layer feature can be utilized for robust visual tracking. We showed that this property has to be unleashed by a novel loss function which simultaneously considers classification accuracy and bounding box quality. Without ensembles or any extra treatment of feature maps, our proposed method achieved state-of-the-art results on several large-scale benchmarks including OTB50, OTB100 and VOT2016. We will make our code publicly available.
2017-05-31T00:00:00
no_new_dataset
false
0.709961
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10449
Yaohang Li
Hao Ji, Michael Mascagni, Yaohang Li
Gaussian Variant of Freivalds' Algorithm for Efficient and Reliable Matrix Product Verification
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article, we consider the general problem of checking the correctness of matrix multiplication. Given three $n \times n$ matrices $A$, $B$, and $C$, the goal is to verify that $A \times B=C$ without carrying out the computationally costly operations of matrix multiplication and comparing the product $A \times B$ with $C$, term by term. This is especially important when some or all of these matrices are very large, and when the computing environment is prone to soft errors. Here we extend Freivalds' algorithm to a Gaussian Variant of Freivalds' Algorithm (GVFA) by projecting the product $A \times B$ as well as $C$ onto a Gaussian random vector and then comparing the resulting vectors. The computational complexity of GVFA is consistent with that of Freivalds' algorithm, which is $O(n^{2})$. However, unlike Freivalds' algorithm, whose probability of a false positive is $2^{-k}$, where $k$ is the number of iterations, our theoretical analysis shows that when $A \times B \neq C$, GVFA produces a false positive on a set of inputs of measure zero with exact arithmetic. When we introduce round-off error and floating-point arithmetic into our analysis, we can show that the larger this error, the higher the probability that GVFA avoids false positives. Moreover, by iterating GVFA $k$ times, the probability of a false positive decreases as $p^k$, where $p$ is a very small value depending on the nature of the fault in the result matrix and the arithmetic system's floating-point precision. Unlike deterministic algorithms, there do not exist any fault patterns that are completely undetectable with GVFA. Thus GVFA can be used to provide efficient fault tolerance in numerical linear algebra, and it can be efficiently implemented on modern computing architectures. In particular, GVFA can be implemented very efficiently on architectures with hardware support for fused multiply-add operations.
2017-05-31T00:00:00
no_new_dataset
false
0.70723
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
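The verification scheme described in the abstract above can be sketched in a few lines of NumPy: project both A @ B and C onto a Gaussian random vector and compare the results, at O(n^2) cost per iteration. The function name and comparison tolerance below are illustrative assumptions; the paper's floating-point analysis is not reproduced here.

```python
import numpy as np

def gvfa_check(A, B, C, k=1, rng=None):
    """Probabilistically verify A @ B == C in O(n^2) per iteration.

    Projects both sides onto a Gaussian random vector: if A @ (B @ x)
    differs from C @ x for any draw, the product is certainly wrong.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    for _ in range(k):
        x = rng.standard_normal(n)
        # Two matrix-vector products instead of one matrix-matrix product.
        if not np.allclose(A @ (B @ x), C @ x):
            return False
    return True
```

With exact arithmetic a wrong product slips through only on a measure-zero set of Gaussian draws, which is what distinguishes this variant from the classical 0/1-vector Freivalds test.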
1705.10453
Hamidreza Alvari
Hamidreza Alvari
Twitter Hashtag Recommendation using Matrix Factorization
null
null
null
null
cs.SI cs.IR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Twitter, one of the biggest and most popular microblogging websites, has evolved into a powerful communication platform which allows millions of active users to generate a huge volume of microposts and queries on a daily basis. To accommodate effective categorization and easy search, users are allowed to make use of hashtags, keywords or phrases prefixed by the hash character, to categorize and summarize their posts. However, valid hashtags are not restricted and are thus created in a free and heterogeneous style, increasing the difficulty of tweet categorization. In this paper, we propose a low-rank weighted matrix factorization based method to recommend hashtags to users solely based on their hashtag usage history and independently of their tweets' contents. We confirm using a two-sample t-test that users are more likely to adopt new hashtags similar to the ones they have previously adopted. In particular, we formulate the problem of hashtag recommendation as an optimization problem and incorporate a hashtag correlation weight matrix into it to account for the similarity between different hashtags. We finally leverage matrix factorization, widely used in recommender systems, to solve the optimization problem by capturing the latent factors of users and hashtags. Empirical experiments demonstrate that our method can properly recommend hashtags.
2017-05-31T00:00:00
no_new_dataset
false
0.711839
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
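The low-rank weighted matrix factorization at the core of the abstract above can be illustrated with a plain gradient-descent sketch. Note this is a generic sketch: the paper's hashtag-correlation weight matrix is reduced here to a per-entry weight W, and all names and hyperparameters are illustrative assumptions.

```python
import numpy as np

def weighted_mf(R, W, rank=2, lam=0.1, lr=0.05, iters=2000, seed=0):
    """Low-rank weighted matrix factorization R ~ U @ V.T.

    R: user-by-hashtag adoption matrix; W weights each entry's
    contribution to the loss; lam is an L2 regularizer on the factors.
    """
    rng = np.random.default_rng(seed)
    m, n = R.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((n, rank))
    for _ in range(iters):
        E = W * (U @ V.T - R)          # weighted residual
        U -= lr * (E @ V + lam * U)    # gradient step on user factors
        V -= lr * (E.T @ U + lam * V)  # gradient step on hashtag factors
    return U, V
```

The score `U[u] @ V[h]` then ranks unseen hashtags h for user u, with the highest-scoring ones recommended.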
1705.10455
Hamidreza Alvari
Hamidreza Alvari
Exploiting Consistency Theory for Modeling Twitter Hashtag Adoption
null
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Twitter, a microblogging service, has evolved into a powerful communication platform with millions of active users who generate immense volume of microposts on a daily basis. To facilitate effective categorization and easy search, users adopt hashtags, keywords or phrases preceded by hash (#) character. Successful prediction of the spread and propagation of information in the form of trending topics or hashtags in Twitter, could help real time identification of new trends and thus improve marketing efforts. Social theories such as consistency theory suggest that people prefer harmony or consistency in their thoughts. In Twitter, for example, users are more likely to adopt the same trending hashtag multiple times before it eventually dies. In this paper, we propose a low-rank weighted matrix factorization approach to model trending hashtag adoption in Twitter based on consistency theory. In particular, we first cast the problem of modeling trending hashtag adoption into an optimization problem, then integrate consistency theory into it as a regularization term and finally leverage widely used matrix factorization to solve the optimization. Empirical experiments demonstrate that our method outperforms other baselines in predicting whether a specific trending hashtag will be used by users in future.
2017-05-31T00:00:00
no_new_dataset
false
0.712567
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10459
Itsik Bergel
Avi Zanko, Itsik Bergel and Amir Leshem
Deep-LMS for gigabit transmission over unshielded twisted pair cables
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we propose a rapidly converging LMS algorithm for crosstalk cancellation. The architecture is similar to deep neural networks, where multiple layers are adapted sequentially. The application motivating this approach is gigabit-rate transmission over unshielded twisted pairs using a vectored system. The crosstalk cancellation algorithm uses an adaptive non-diagonal preprocessing matrix prior to a conventional LMS crosstalk canceler. The update of the preprocessing matrix is inspired by deep neural networks. However, since most of the operations in the Deep-LMS algorithm are linear, we can provide an exact convergence speed analysis. The role of the preprocessing matrix is to speed up the convergence of the conventional LMS crosstalk canceler and hence of the overall system. Deep-LMS is important for crosstalk cancellation in the novel G.fast standard, where traditional LMS converges very slowly due to the ill-conditioned covariance matrix of the received signal at the extended bandwidth. Simulation results support our analysis and show a significant reduction in convergence time compared to existing LMS variants.
2017-05-31T00:00:00
no_new_dataset
false
0.709225
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
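As background for the Deep-LMS discussion above, a minimal single-channel LMS identifier shows the conventional stochastic-gradient update that the paper's preprocessing layer is designed to accelerate. This is the textbook LMS rule, not the Deep-LMS scheme itself; the function name and parameters are illustrative.

```python
import numpy as np

def lms_identify(x, d, taps=4, mu=0.05):
    """Identify an unknown FIR channel with the LMS rule w += mu * e * u.

    x: input samples; d: desired (channel output) samples.
    Returns the adapted tap weights.
    """
    w = np.zeros(taps)
    for n in range(taps, len(x)):
        u = x[n - taps:n][::-1]   # regressor, most recent sample first
        e = d[n] - w @ u          # a-priori estimation error
        w += mu * e * u           # stochastic-gradient update
    return w
```

The convergence speed of this update is governed by the eigenvalue spread of the input covariance, which is exactly why an ill-conditioned G.fast received signal slows plain LMS and motivates the adaptive preprocessing matrix described above.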
1705.10475
Brynle Barrett
G. Lefevre, G. Condon, I. Riou, L. Chichet, M. Essayeh, M. Rabault, L. Antoni-Micollier, N. Mielec, D. Holleville, L. Amand, R. Geiger, A. Landragin, M. Prevedelli, B. Barrett, B. Battelier, A. Bertoldi, B. Canuel, P. Bouyer
Studies of general relativity with quantum sensors
11 pages, 7 figures, to appear in "Proceedings of the 52nd Rencontres de Moriond on Gravitation"
null
null
null
physics.atom-ph gr-qc quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present two projects aiming to probe key aspects of the theory of General Relativity with high-precision quantum sensors. These projects use cold-atom interferometry with the aim of measuring gravitational waves and testing the equivalence principle. To detect gravitational waves, a large multi-sensor demonstrator is currently under construction that will exploit correlations between three atom interferometers spread along a 200 m optical cavity. Similarly, a test of the weak equivalence principle is currently underway using a compact and mobile dual-species interferometer, which will serve as a prototype for future high-precision tests onboard an orbiting satellite. We present recent results and improvements related to both projects.
2017-05-31T00:00:00
no_new_dataset
false
0.699383
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10494
Baruch Epstein
Baruch Epstein, Ron Meir, Tomer Michaeli
Joint auto-encoders: a flexible multi-task learning framework
null
null
null
null
stat.ML cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The incorporation of prior knowledge into learning is essential in achieving good performance based on small noisy samples. Such knowledge is often incorporated through the availability of related data arising from domains and tasks similar to the one of current interest. Ideally one would like to allow both the data for the current task and for previous related tasks to self-organize the learning system in such a way that commonalities and differences between the tasks are learned in a data-driven fashion. We develop a framework for learning multiple tasks simultaneously, based on sharing features that are common to all tasks, achieved through the use of a modular deep feedforward neural network consisting of shared branches, dealing with the common features of all tasks, and private branches, learning the specific unique aspects of each task. Once an appropriate weight sharing architecture has been established, learning takes place through standard algorithms for feedforward networks, e.g., stochastic gradient descent and its variations. The method deals with domain adaptation and multi-task learning in a unified fashion, and can easily deal with data arising from different types of sources. Numerical experiments demonstrate the effectiveness of learning in domain adaptation and transfer learning setups, and provide evidence for the flexible and task-oriented representations arising in the network.
2017-05-31T00:00:00
no_new_dataset
false
0.711007
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10496
Christian Schmidt
Christian Schmidt and Eleanor Dunn and Madeleine Lowery and Ursula van Rienen
Uncertainty Quantification of Oscillation Suppression during DBS in a Coupled Finite Element and Network Model
10 pages
null
10.1109/TNSRE.2016.2608925
null
q-bio.NC cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Models of the cortico-basal ganglia network and volume conductor models of the brain can provide insight into the mechanisms of action of deep brain stimulation (DBS). In this study, the coupling of a network model, under parkinsonian conditions, to the extracellular field distribution obtained from a three dimensional finite element model of a rodent's brain during DBS is presented. This coupled model is used to investigate the influence of uncertainty in the electrical properties of brain tissue and encapsulation tissue, formed around the electrode after implantation, on the suppression of oscillatory neural activity during DBS. The resulting uncertainty in this effect of DBS on the network activity is quantified using a computationally efficient and non-intrusive stochastic approach based on the generalized Polynomial Chaos. The results suggest that variations in the electrical properties of brain tissue may have a substantial influence on the level of suppression of oscillatory activity during DBS. Applying a global sensitivity analysis on the suppression of the simulated oscillatory activity showed that the influence of uncertainty in the electrical properties of the encapsulation tissue had only a minor influence, in agreement with previous experimental and computational studies investigating the mechanisms of current-controlled DBS in the literature.
2017-05-31T00:00:00
no_new_dataset
false
0.708052
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10503
Marko Jankovic
Marko V. Jankovic
Quantum Low Entropy based Associative Reasoning or QLEAR Learning
null
null
null
null
cs.LG cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a classification method based on a learning paradigm we call Quantum Low Entropy based Associative Reasoning, or QLEAR learning. The approach is based on the idea that classification can be understood as supervised clustering, where quantum entropy, in the context of the quantum probabilistic model, is used as a "capturer" (measure, or external index) of the "natural structure" of the data. By using quantum entropy we make no assumption about the linear separability of the data to be classified. The basic idea is to find close neighbors of a query sample and then use the relative change in quantum entropy as a measure of the similarity of the newly arrived sample to the representatives of interest. In other words, the method is based on calculating the quantum entropy of a referent system and its relative change upon the addition of the newly arrived sample. The referent system consists of the vectors that represent the individual classes and that are the most similar, in the Euclidean distance sense, to the vector being analyzed. Here, we analyze the classification problem in the context of measuring similarities to prototype examples of categories. While nearest neighbor classifiers are natural in this setting, they suffer from high variance (in the bias-variance decomposition) in the case of limited sampling. Alternatively, one could use machine learning techniques (like support vector machines), but they involve time-consuming optimization. Here we propose a hybrid of nearest neighbor and machine learning techniques which deals naturally with the multi-class setting, has reasonable computational complexity both in training and at run time, and yields excellent results in practice.
2017-05-31T00:00:00
no_new_dataset
false
0.712552
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
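The core QLEAR step described above — assigning a query to the class whose quantum (von Neumann) entropy changes least when the query is appended to that class's representatives — can be sketched as follows. This is a loose illustration built only from the abstract: the Gram-matrix density construction and all function names are assumptions, not the paper's exact formulation.

```python
import numpy as np

def vn_entropy(X):
    """Von Neumann entropy of a density matrix built from row vectors X."""
    rho = X.T @ X                 # Gram-style positive semidefinite matrix
    rho = rho / np.trace(rho)     # normalize to unit trace
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]  # drop numerical zeros
    return -np.sum(evals * np.log(evals))

def qlear_classify(query, class_reps):
    """Pick the class whose entropy changes least when the query joins it.

    class_reps: list of (n_i, d) arrays of nearest representatives per class.
    """
    deltas = []
    for X in class_reps:
        base = vn_entropy(X)
        augmented = vn_entropy(np.vstack([X, query]))
        deltas.append(abs(augmented - base))
    return int(np.argmin(deltas))
```

A query aligned with a class's representatives leaves that class's density matrix nearly pure (small entropy change), while joining a mismatched class mixes the state and raises its entropy.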
1705.10504
Daniel Errandonea
A. Benmakhlouf, D. Errandonea, M. Bouchenafa, S. Maabed, A. Bouhemadou, A. Bentabet
New Pressure-Induced Polymorphic Transitions of Anhydrous Magnesium Sulfate
35 pages, 9 figures, 9 tables
Dalton Trans. 46, 5058 - 5068 (2017)
10.1039/c7dt00539c
null
cond-mat.mtrl-sci physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The effects of pressure on the crystal structure of the three known polymorphs of magnesium sulfate have been theoretically studied by means of DFT calculations up to 45 GPa. We determined that at ambient conditions gamma MgSO4 is an unstable polymorph, which decomposes into MgO and SO3, and that the response of the other two polymorphs to hydrostatic pressure is non-isotropic. Additionally, we found that at all pressures beta MgSO4 has a larger enthalpy than alpha MgSO4. This indicates that beta MgSO4 is thermodynamically unstable versus alpha MgSO4 and predicts the occurrence of a beta-alpha phase transition under moderate compression. Our calculations also predict the existence under pressure of additional phase transitions to two new polymorphs of MgSO4, which we name delta MgSO4 and epsilon MgSO4. The alpha-delta transition is predicted to occur at 17.5 GPa, and the delta-epsilon transition at 35 GPa, pressures that can nowadays easily be achieved experimentally. All the predicted structural transformations are characterized as first-order transitions. This suggests that they may be non-reversible, and therefore the new polymorphs could be recovered as metastable polymorphs at ambient conditions. The crystal structures of the two new polymorphs are reported. In them, the coordination number of sulfur is four, as in the previously known polymorphs, but the coordination number of magnesium is eight instead of six. In the article we report the axial and bond compressibilities for the four polymorphs of MgSO4. The pressure-volume equation of state of each phase is also given. The values obtained for the bulk modulus are 62 GPa, 57 GPa, 102 GPa, and 119 GPa for alpha MgSO4, beta MgSO4, delta MgSO4, and epsilon MgSO4, respectively. Finally, the electronic band structures of these four polymorphs of MgSO4 have been calculated for the first time.
2017-05-31T00:00:00
no_new_dataset
false
0.712773
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10507
Daniel Errandonea
David Santamaria Perez, Tomas Marqueno, Simon MacLeod, Javier Ruiz Fuertes, Dominik Daisenberger, Raquel Chulia Jordan, Daniel Errandonea, Jose Luis Jorda, Fernando Rey, Chris McGuire, Adam Mahkluf, Abby Kavner, Catalin Popescu
Structural evolution of CO2 filled pure silica LTA zeolite under high-pressure high-temperature conditions
29 pages, 9 figures, 5 tables
Chem. Mater. 29, 4502 - 4510 (2017)
10.1021/acs.chemmater.7b01158
null
cond-mat.mtrl-sci physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The crystal structure of CO2-filled pure SiO2 LTA zeolite has been studied at high pressures and temperatures using synchrotron-based X-ray powder diffraction. Its structure consists of 13 CO2 guest molecules, 12 of them accommodated in the large alpha cages and 1 in the beta cages, giving a SiO2:CO2 stoichiometric ratio smaller than 2. The structure remains stable under pressure up to 20 GPa with a slight pressure-dependent rhombohedral distortion, indicating that pressure-induced amorphization is prevented by the insertion of guest species in this open framework. The ambient-temperature lattice compressibility has been determined. In situ high-pressure resistive heating experiments up to 750 K allow us to estimate the thermal expansivity at 5 GPa. Our data confirm that the insertion of CO2 reverses the negative thermal expansion of the empty zeolite structure. No evidence of any chemical reaction was observed. The possibility of synthesizing a silicon carbonate at high temperatures and higher pressures is discussed in terms of the evolution of C-O and Si-O distances between molecular and framework atoms.
2017-05-31T00:00:00
no_new_dataset
false
0.709018
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10520
Laszlo Csirmaz
Laszlo Csirmaz and Peter Ligeti
Secret sharing on large girth graphs
null
null
null
null
cs.IT math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate graph-based secret sharing schemes and their information ratio, also called complexity, measuring the maximal amount of information the vertices have to store. It was conjectured that in large-girth graphs, where the interaction between far-away nodes is restricted to a single path, this ratio is bounded. This conjecture was supported by several results, most notably by a result of Csirmaz and Ligeti saying that the complexity of graphs with girth at least six and no neighboring high-degree vertices is strictly below 2. In this paper we refute the above conjecture. First, a family of $d$-regular graphs is defined iteratively such that the complexity of these graphs is the largest possible $(d+1)/2$ allowed by Stinson's bound. This part extends earlier results of van Dijk and Blundo et al, and uses the so-called entropy method. Second, using combinatorial arguments, we show that this family contains graphs with arbitrarily large girth. In particular, we obtain the following purely combinatorial result, which might be interesting on its own: there are $d$-regular graphs with arbitrarily large girth such that any fractional edge-cover by stars (or by complete multipartite graphs) must cover some vertex $(d+1)/2$ times.
2017-05-31T00:00:00
no_new_dataset
false
0.711226
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10524
Zhanyu Ma
Zhanyu Ma, Jing-Hao Xue, Arne Leijon, Zheng-Hua Tan, Zhen Yang, and Jun Guo
Decorrelation of Neutral Vector Variables: Theory and Applications
null
null
null
null
cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose novel strategies for neutral vector variable decorrelation. Two fundamental invertible transformations, namely serial nonlinear transformation and parallel nonlinear transformation, are proposed to carry out the decorrelation. For a neutral vector variable, which is not multivariate Gaussian distributed, the conventional principal component analysis (PCA) cannot yield mutually independent scalar variables. With the two proposed transformations, a highly negatively correlated neutral vector can be transformed to a set of mutually independent scalar variables with the same degrees of freedom. We also evaluate the decorrelation performances for the vectors generated from a single Dirichlet distribution and a mixture of Dirichlet distributions. The mutual independence is verified with the distance correlation measurement. The advantages of the proposed decorrelation strategies are intensively studied and demonstrated with synthesized data and practical application evaluations.
2017-05-31T00:00:00
no_new_dataset
false
0.71057
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10528
Joshua Achiam
Joshua Achiam, David Held, Aviv Tamar, Pieter Abbeel
Constrained Policy Optimization
Accepted to ICML 2017
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For many applications of reinforcement learning it can be more convenient to specify both a reward function and constraints, rather than trying to design behavior through the reward function. For example, systems that physically interact with or around humans should satisfy safety constraints. Recent advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015, Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in high-dimensional control, but do not consider the constrained setting. We propose Constrained Policy Optimization (CPO), the first general-purpose policy search algorithm for constrained reinforcement learning with guarantees for near-constraint satisfaction at each iteration. Our method allows us to train neural network policies for high-dimensional control while making guarantees about policy behavior all throughout training. Our guarantees are based on a new theoretical result, which is of independent interest: we prove a bound relating the expected returns of two policies to an average divergence between them. We demonstrate the effectiveness of our approach on simulated robot locomotion tasks where the agent must satisfy constraints motivated by safety.
2017-05-31T00:00:00
no_new_dataset
false
0.709488
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10529
Scientific Information Service CERN
P. Gibbon
Introduction to Plasma Physics
15 pages, contribution to the CAS - CERN Accelerator School: Plasma Wake Acceleration, CERN, Geneva, Switzerland, 23 - 29 Nov 2014
CERN Yellow Report CERN-2016-001, pp. 51-65
10.5170/CERN-2016-001.51
null
physics.acc-ph
http://creativecommons.org/licenses/by/4.0/
These notes are intended to provide a brief primer in plasma physics, introducing common definitions, basic properties, and typical processes found in plasmas. These concepts are inherent in contemporary plasma-based accelerator schemes, and thus provide a foundation for the more advanced expositions that follow in this volume. No prior knowledge of plasma physics is required, but the reader is assumed to be familiar with basic electrodynamics and fluid mechanics.
2017-05-31T00:00:00
no_new_dataset
false
0.714236
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10534
Scientific Information Service CERN
Z. Najmudin
Laser Wakefield Accelerators
10 pages, contribution to the CAS - CERN Accelerator School: Plasma Wake Acceleration, CERN, Geneva, Switzerland, 23 - 29 Nov 2014
CERN Yellow Report CERN 2016-001, pp.109-118
10.5170/CERN-2016-001.109
null
physics.acc-ph physics.plasm-ph
http://creativecommons.org/licenses/by/4.0/
The one-dimensional wakefield generation equations are solved for increasing levels of non-linearity, to demonstrate how they contribute to the overall behaviour of a non-linear wakefield in a plasma. The effect of laser guiding is also studied as a way to increase the interaction length of a laser wakefield accelerator.
2017-05-31T00:00:00
no_new_dataset
false
0.70964
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10535
Scientific Information Service CERN
R. Bingham and R. Trines
Introduction to Plasma Accelerators: the Basics
11 pages, Contribution to the CAS - CERN Accelerator School: Plasma Wake Acceleration, CERN, Geneva, Switzerland, 23 - 29 Nov 2014
CERN Yellow Report CERN 2016-001, pp. 67-77
10.5170/CERN-2016-001.67
null
physics.acc-ph physics.plasm-ph
http://creativecommons.org/licenses/by/4.0/
In this article, we concentrate on the basic physics of relativistic plasma wave accelerators. The generation of relativistic plasma waves by intense lasers or electron beams in low-density plasmas is important in the quest for producing ultra-high acceleration gradients for accelerators. A number of methods are being pursued vigorously to achieve ultra-high acceleration gradients using various plasma wave drivers; these include wakefield accelerators driven by photon, electron, and ion beams. We describe the basic equations and show how intense beams can generate a large-amplitude relativistic plasma wave capable of accelerating particles to high energies. We also demonstrate how these same relativistic electron waves can accelerate photons in plasmas.
2017-05-31T00:00:00
no_new_dataset
false
0.71278
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10537
Scientific Information Service CERN
P. Muggli
Beam-driven, Plasma-based Particle Accelerators
24 pages, contribution to the CAS - CERN Accelerator School: Plasma Wake Acceleration, CERN, Geneva, Switzerland, 23 - 29 Nov 2014
CERN Yellow Report CERN 2016-001, pp.119-142
10.5170/CERN-2016-001.119
null
physics.acc-ph physics.plasm-ph
http://creativecommons.org/licenses/by/4.0/
We briefly give some of the characteristics of the beam-driven, plasma-based particle accelerator known as the plasma wakefield accelerator (PWFA). We also mention some of the major results that have been obtained since the birth of the concept. We focus on high-energy particle beams where possible.
2017-05-31T00:00:00
no_new_dataset
false
0.715012
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10542
Scientific Information Service CERN
J. Faure
Plasma Injection Schemes for Laser-Plasma Accelerators
15 pages, contribution to the CAS - CERN Accelerator School: Plasma Wake Acceleration, CERN, Geneva, Switzerland, 23 - 29 Nov 2014
CERN Yellow Report CERN 2016-001, pp.143-157
10.5170/CERN-2016-001.143
null
physics.acc-ph physics.plasm-ph
http://creativecommons.org/licenses/by/4.0/
Plasma injection schemes are crucial for producing high-quality electron beams in laser-plasma accelerators. This article introduces the general concepts of plasma injection. First, a Hamiltonian model for particle trapping and acceleration in plasma waves is introduced; ionization injection and colliding-pulse injection are described in the framework of this Hamiltonian model. We then proceed to consider injection in plasma density gradients.
2017-05-31T00:00:00
no_new_dataset
false
0.712835
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10546
Hamed R. Tavakoli
Hamed R. Tavakoli, Fawad Ahmed, Ali Borji, Jorma Laaksonen
Saliency Revisited: Analysis of Mouse Movements versus Fixations
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper revisits visual saliency prediction by evaluating the recent advancements in this field such as crowd-sourced mouse tracking-based databases and contextual annotations. We pursue a critical and quantitative approach towards some of the new challenges including the quality of mouse tracking versus eye tracking for model training and evaluation. We extend quantitative evaluation of models in order to incorporate contextual information by proposing an evaluation methodology that allows accounting for contextual factors such as text, faces, and object attributes. The proposed contextual evaluation scheme facilitates detailed analysis of models and helps identify their pros and cons. Through several experiments, we find that (1) mouse tracking data has lower inter-participant visual congruency and higher dispersion, compared to the eye tracking data, (2) mouse tracking data does not totally agree with eye tracking in general and in terms of different contextual regions in specific, (3) mouse tracking data leads to acceptable results in training current existing models, and (4) mouse tracking data is less reliable for model selection and evaluation. The contextual evaluation also reveals that, among the studied models, there is no single model that performs best on all the tested annotations.
2017-05-31T00:00:00
no_new_dataset
false
0.710621
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10552
Longquan Dai
Longquan Dai
Interpreting and Extending The Guided Filter Via Cyclic Coordinate Descent
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we will disclose that the Guided Filter (GF) can be interpreted as the Cyclic Coordinate Descent (CCD) solver of a Least Square (LS) objective function. This discovery implies a possible way to extend GF because we can alter the objective function of GF and define new filters as the first pass iteration of the CCD solver of modified objective functions. Moreover, referring to the iterative minimizing procedure of CCD, we can derive new rolling filtering schemes. Hence, under the guidance of this discovery, we not only propose new GF-like filters adapting to the specific requirements of applications but also offer thorough explanations for two rolling filtering schemes of GF as well as the way to extend them. Experiments show that our new filters and extensions produce state-of-the-art results.
2017-05-31T00:00:00
no_new_dataset
false
0.711905
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10557
John Aslanides
John Aslanides, Jan Leike, Marcus Hutter
Universal Reinforcement Learning Algorithms: Survey and Experiments
8 pages, 6 figures, Twenty-sixth International Joint Conference on Artificial Intelligence (IJCAI-17)
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Many state-of-the-art reinforcement learning (RL) algorithms typically assume that the environment is an ergodic Markov Decision Process (MDP). In contrast, the field of universal reinforcement learning (URL) is concerned with algorithms that make as few assumptions as possible about the environment. The universal Bayesian agent AIXI and a family of related URL algorithms have been developed in this setting. While numerous theoretical optimality results have been proven for these agents, there has been no empirical investigation of their behavior to date. We present a short and accessible survey of these URL algorithms under a unified notation and framework, along with results of some experiments that qualitatively illustrate some properties of the resulting policies, and their relative performance on partially-observable gridworld environments. We also present an open-source reference implementation of the algorithms which we hope will facilitate further understanding of, and experimentation with, these ideas.
2017-05-31T00:00:00
no_new_dataset
false
0.711575
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10564
Scientific Information Service CERN
M. Ferrario
Injection, Extraction and Matching
21 pages, contribution to the CAS - CERN Accelerator School: Plasma Wake Acceleration, CERN, Geneva, Switzerland, 23 - 29 Nov 2014
CERN Yellow Report CERN 2016-001, pp.159-179
10.5170/CERN-2016-001.159
null
physics.acc-ph
http://creativecommons.org/licenses/by/4.0/
In this lecture we introduce from basic principles the main concepts of beam focusing and transport in modern accelerators using the beam envelope equation as a convenient mathematical tool. Matching conditions suitable for preserving beam quality are derived from the model for significant beam dynamics regimes. An extension of the model to the case of plasma accelerators is introduced. The understanding of similarities and differences with respect to traditional accelerators is also emphasized.
2017-05-31T00:00:00
no_new_dataset
false
0.711376
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10566
Scientific Information Service CERN
B. Cros
Laser-driven Plasma Wakefield: Propagation Effects
24 pages, contribution to the CAS - CERN Accelerator School: Plasma Wake Acceleration, CERN, Geneva, Switzerland, 23 - 29 Nov 2014
CERN Yellow Report CERN 2016-001, pp.207-230
10.5170/CERN-2016-001.207
null
physics.acc-ph physics.plasm-ph
http://creativecommons.org/licenses/by/4.0/
In the frame of laser-driven wakefield acceleration, the main characteristics of laser propagation and plasma wave excitation are described, with an emphasis on the role of propagation distance for electron acceleration. To optimize interaction length and maximize energy gain, operation at low plasma density is the most promising regime for achieving ultra-relativistic energies. Among the possible methods of extending propagation length at low plasma density, laser guiding by grazing incidence reflection at the wall of dielectric capillary tubes has several assets. The properties of laser guiding and the measurement of plasma waves over long distances are presented.
2017-05-31T00:00:00
no_new_dataset
false
0.711112
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10569
Scientific Information Service CERN
M. Roth and M. Schollmeier
Ion Acceleration - Target Normal Sheath Acceleration
40 pages, contribution to the CAS - CERN Accelerator School: Plasma Wake Acceleration, CERN, Geneva, Switzerland, 23 - 29 Nov 2014
CERN Yellow Report CERN 2016-001, pp. 231-270
10.5170/CERN-2016-001.231
null
physics.acc-ph physics.plasm-ph
http://creativecommons.org/licenses/by/4.0/
Energetic ions have been observed since the very first laser-plasma experiments. Their origin was found to be the charge separation of electrons heated by the laser, which transfers energy to the ions accelerated in the field. The advent of ultra-intense lasers with pulse lengths in the femtosecond regime resulted in the discovery of very energetic ions with characteristics quite different from those driven by long-pulse lasers. Discovered in the late 1990s, these ion beams have become the focus of intense research worldwide, because of their unique properties and high particle numbers. Based on their non-isotropic, beam-like behaviour, which is always perpendicular to the emitting surface, the acceleration mechanism is called target normal sheath acceleration (TNSA). We address the physics of the mechanism and its dependence on laser and target parameters. Techniques to explore and diagnose the beams, to make them useful for applications, are also addressed.
2017-05-31T00:00:00
no_new_dataset
false
0.714003
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10573
Scientific Information Service CERN
E. Gschwendtner
AWAKE, A Particle-driven Plasma Wakefield Acceleration Experiment
18 pages, contribution to the CAS - CERN Accelerator School: Plasma Wake Acceleration, CERN, Geneva, Switzerland, 23 - 29 Nov 2014
CERN Yellow Report CERN 2016-001, pp.271-288
10.5170/CERN-2016-001.271
null
physics.acc-ph
http://creativecommons.org/licenses/by/4.0/
The Advanced Proton Driven Plasma Wakefield Acceleration Experiment (AWAKE) aims at studying plasma wakefield generation and electron acceleration driven by proton bunches. It is a proof-of-principle R&D experiment at CERN and the world's first proton driven plasma wakefield acceleration experiment. The AWAKE experiment will be installed in the former CNGS facility and uses the 400 GeV/c proton beam bunches from the SPS. The first experiments will focus on the self-modulation instability of the long (r.m.s ~12 cm) proton bunch in the plasma. These experiments are planned for the end of 2016. Later, in 2017/2018, low energy (~15 MeV) electrons will be externally injected to sample the wakefields and be accelerated beyond 1 GeV.
2017-05-31T00:00:00
no_new_dataset
false
0.708956
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10579
Stefano Dafarra
Stefano Dafarra, Francesco Romano and Francesco Nori
Torque-Controlled Stepping-Strategy Push Recovery: Design and Implementation on the iCub Humanoid Robot
null
null
10.1109/HUMANOIDS.2016.7803271
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the challenges for the robotics community is to deploy robots which can reliably operate in real world scenarios together with humans. A crucial requirement for legged robots is the capability to properly balance on their feet, rejecting external disturbances. iCub is a state-of-the-art humanoid robot which has only recently started to balance on its feet. While the current balancing controller has proved successful in various scenarios, it still misses the capability to properly react to strong pushes by taking steps. This paper goes in this direction. It proposes and implements a control strategy based on the Capture Point concept [1]. Instead of relying on position control, like most of Capture Point related approaches, the proposed strategy generates references for the momentum-based torque controller already implemented on the iCub, thus extending its capabilities to react to external disturbances, while retaining the advantages of torque control when interacting with the environment. Experiments in the Gazebo simulator and on the iCub humanoid robot validate the proposed strategy.
2017-05-31T00:00:00
no_new_dataset
false
0.706458
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10583
Soumyabrata Dev
Soumyabrata Dev, Florian M. Savoy, Yee Hui Lee, Stefan Winkler
Nighttime sky/cloud image segmentation
Accepted in Proc. IEEE International Conference on Image Processing (ICIP), 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Imaging the atmosphere using ground-based sky cameras is a popular approach to study various atmospheric phenomena. However, it usually focuses on the daytime. Nighttime sky/cloud images are darker and noisier, and thus harder to analyze. An accurate segmentation of sky/cloud images is already challenging because of the clouds' non-rigid structure and size, and the lower and less stable illumination of the night sky increases the difficulty. Nonetheless, nighttime cloud imaging is essential in certain applications, such as continuous weather analysis and satellite communication. In this paper, we propose a superpixel-based method to segment nighttime sky/cloud images. We also release the first nighttime sky/cloud image segmentation database to the research community. The experimental results show the efficacy of our proposed algorithm for nighttime images.
2017-05-31T00:00:00
new_dataset
true
0.702475
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10586
Zhenzhou Wu
Zhenzhou Wu and Xin Zheng and Daniel Dahlmeier
Character-Based Text Classification using Top Down Semantic Model for Sentence Representation
null
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Despite the success of deep learning on many fronts especially image and speech, its application in text classification often is still not as good as a simple linear SVM on n-gram TF-IDF representation especially for smaller datasets. Deep learning tends to emphasize on sentence level semantics when learning a representation with models like recurrent neural network or recursive neural network, however from the success of TF-IDF representation, it seems a bag-of-words type of representation has its strength. Taking advantage of both representations, we present a model known as TDSM (Top Down Semantic Model) for extracting a sentence representation that considers both the word-level semantics by linearly combining the words with attention weights and the sentence-level semantics with BiLSTM and use it on text classification. We apply the model on characters and our results show that our model is better than all the other character-based and word-based convolutional neural network models by \cite{zhang15} across seven different datasets with only 1\% of their parameters. We also demonstrate that this model beats traditional linear models on TF-IDF vectors on small and polished datasets like news articles in which typically deep learning models surrender.
2017-05-31T00:00:00
no_new_dataset
false
0.710234
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10588
Scientific Information Service CERN
S.P.D. Mangles
An Overview of Recent Progress in Laser Wakefield Acceleration Experiments
12 pages, contribution to the CAS - CERN Accelerator School: Plasma Wake Acceleration, CERN, Geneva, Switzerland, 23 - 29 Nov 2014
CERN Yellow Report CERN 2016-001, pp.289-300
10.5170/CERN-2016-001.289
null
physics.acc-ph physics.plasm-ph
http://creativecommons.org/licenses/by/4.0/
The goal of this paper is to examine experimental progress in laser wakefield acceleration over the past decade (2004-2014), and to use trends in the data to understand some of the important physical processes. By examining a set of over 50 experiments, various trends concerning the relationship between plasma density, accelerator length, laser power and the final electron beam energy are revealed. The data suggest that current experiments are limited by dephasing and that current experiments typically require some pulse evolution to reach the trapping threshold.
2017-05-31T00:00:00
no_new_dataset
false
0.713619
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10589
James P. Sethna
Lorien X. Hayden, Alexander A. Alemi, Paul H. Ginsparg, James P. Sethna
Jeffrey's prior sampling of deep sigmoidal networks
null
null
null
null
cond-mat.dis-nn cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural networks have been shown to have a remarkable ability to uncover low dimensional structure in data: the space of possible reconstructed images form a reduced model manifold in image space. We explore this idea directly by analyzing the manifold learned by Deep Belief Networks and Stacked Denoising Autoencoders using Monte Carlo sampling. The model manifold forms an only slightly elongated hyperball with actual reconstructed data appearing predominantly on the boundaries of the manifold. In connection with the results we present, we discuss problems of sampling high-dimensional manifolds as well as recent work [M. Transtrum, G. Hart, and P. Qiu, Submitted (2014)] discussing the relation between high dimensional geometry and model reduction.
2017-05-31T00:00:00
no_new_dataset
false
0.712829
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10591
X. Sharon Hu
Xiaoming Chen, Jianxu Chen, Danny Z. Chen, and Xiaobo Sharon Hu
Optimizing Memory Efficiency for Convolution Kernels on Kepler GPUs
null
null
null
null
cs.DC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Convolution is a fundamental operation in many applications, such as computer vision, natural language processing, image processing, etc. Recent successes of convolutional neural networks in various deep learning applications put even higher demand on fast convolution. The high computation throughput and memory bandwidth of graphics processing units (GPUs) make GPUs a natural choice for accelerating convolution operations. However, maximally exploiting the available memory bandwidth of GPUs for convolution is a challenging task. This paper introduces a general model to address the mismatch between the memory bank width of GPUs and computation data width of threads. Based on this model, we develop two convolution kernels, one for the general case and the other for a special case with one input channel. By carefully optimizing memory access patterns and computation patterns, we design a communication-optimized kernel for the special case and a communication-reduced kernel for the general case. Experimental data based on implementations on Kepler GPUs show that our kernels achieve 5.16X and 35.5% average performance improvement over the latest cuDNN library, for the special case and the general case, respectively.
2017-05-31T00:00:00
no_new_dataset
false
0.713996
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10595
Christopher Portmann
Christopher Portmann
(Quantum) Min-Entropy Resources
39+18 pages, 11 figures
null
null
null
quant-ph cs.CR cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We model (interactive) resources that provide Alice with a string $X$ and a guarantee that any Eve interacting with her interface of the resource obtains a (quantum) system $E$ such that the conditional (smooth) min-entropy of $X$ given $E$ is lower bounded by some $k$. This (abstract) resource specification encompasses any setting that results in the honest players holding such a string (or aborting). For example, it could be constructed from, e.g., noisy channels, quantum key distribution (QKD), or a violation of Bell inequalities, which all may be used to derive bounds on the min-entropy of $X$. As a first application, we use this min-entropy resource to modularize key distribution (KD) schemes by dividing them in two parts, which may be analyzed separately. In the first part, a KD protocol constructs a min-entropy resource given the (physical) resources available in the specific setting considered. In the second, it distills secret key from the min-entropy resource---i.e., it constructs a secret key resource. We prove security for a generic key distillation protocol that may use any min-entropy resource. Since the notion of resource construction is composable---security of a composed protocol follows from the security of its parts--- this reduces proving security of a KD protocol (e.g., QKD) to proving that it constructs a min-entropy resource. As a second application, we provide a composable security proof for the recent Fehr-Salvail protocol [EUROCRYPT 2017] that authenticates classical messages with a quantum message authentication code (Q-MAC), and recycles all the key upon successfully verifying the authenticity of the message. This protocol uses (and recycles) a non-uniform key, which we model as consuming and constructing a min-entropy resource.
2017-05-31T00:00:00
no_new_dataset
false
0.709074
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10596
Zhulin Liu
Zhulin Liu and C. L. Philip Chen
Approximation learning methods of Harmonic Mappings in relation to Hardy Spaces
2016 3rd International Conference on Informative and Cybernetics for Computational Social Systems (ICCSS)
null
10.1109/ICCSS.2016.7586421
null
math.NA cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new Hardy space approach to the Dirichlet type problem based on Tikhonov regularization and Reproducing Kernel Hilbert Space is discussed in this paper, which turns out to be a typical extremal problem located on the upper half complex plane. If considering this in the Hardy space, the optimization operator of this problem will be highly simplified and an efficient algorithm is possible. This is mainly realized by the help of reproducing properties of the functions in the Hardy space of the upper half complex plane, and the detailed algorithm is proposed. Moreover, harmonic mappings, which are significant geometric transformations, are commonly used in many applications such as image processing, since they describe the energy minimization mappings between individual manifolds. Particularly, when we focus on the planar mappings between two Euclidean planar regions, the harmonic mappings exist and are unique, which is guaranteed solidly by the existence of harmonic functions. This property is attractive and simulation results are shown in this paper to ensure the capability of applications such as planar shape distortion and surface registration.
2017-05-31T00:00:00
no_new_dataset
false
0.709627
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10608
Birte Schmidtmann
Birte Schmidtmann, Pawel Buchm\"uller, Manuel Torrilhon
Third-order Limiting for Hyperbolic Conservation Laws applied to Adaptive Mesh Refinement and Non-Uniform 2D Grids
null
null
null
null
math.NA cs.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we extend the recently developed third-order limiter function $H_{3\text{L}}^{(c)}$ [J. Sci. Comput., (2016), 68(2), pp.~624--652] to make it applicable for more elaborate test cases in the context of finite volume schemes. This work covers the generalization to non-uniform grids in one and two space dimensions, as well as two-dimensional Cartesian grids with adaptive mesh refinement (AMR). The extension to 2D is obtained by the common approach of dimensional splitting. In order to apply this technique without loss of third-order accuracy, the order-fix developed by Buchm\"uller and Helzel [J. Sci. Comput., (2014), 61(2), pp.~343--368] is incorporated into the scheme. Several numerical examples on different grid configurations show that the limiter function $H_{3\text{L}}^{(c)}$ maintains the optimal third-order accuracy on smooth profiles and avoids oscillations in case of discontinuous solutions.
2017-05-31T00:00:00
no_new_dataset
false
0.710979
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10614
Mahesh Babu Vaddi
Mahesh Babu Vaddi and B. Sundar Rajan
Near-Optimal Vector Linear Index Codes For Single Unicast Index Coding Problems with Symmetric Neighboring Interference
14 pages, 8 figures and 3 tables. arXiv admin note: substantial text overlap with arXiv:1705.05060, arXiv:1705.03192
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A single unicast index coding problem (SUICP) with symmetric neighboring interference (SNI) has equal number of $K$ messages and $K$ receivers, the $k$th receiver $R_{k}$ wanting the $k$th message $x_{k}$ and having the side-information $\mathcal{K}_{k}=(\mathcal{I}_{k} \cup x_{k})^c,$ where ${I}_k= \{x_{k-U},\dots,x_{k-2},x_{k-1}\}\cup\{x_{k+1}, x_{k+2},\dots,x_{k+D}\}$ is the interference with $D$ messages after and $U$ messages before its desired message. Maleki, Cadambe and Jafar obtained the capacity of this single unicast index coding problem with symmetric neighboring interference (SUICP-SNI) with $K$ tending to infinity and Blasiak, Kleinberg and Lubetzky for the special case of $(D=U=1)$ with $K$ being finite. In our previous work, we proved the capacity of SUICP-SNI for arbitrary $K$ and $D$ with $U=\text{gcd}(K,D+1)-1$. This paper deals with near-optimal linear code construction for SUICP-SNI with arbitrary $K,U$ and $D.$ For SUICP-SNI with arbitrary $K,U$ and $D$, we define a set of $2$-tuples such that for every $(a,b)$ in that set the rate $D+1+\frac{a}{b}$ is achieved by using vector linear index codes over every field. We prove that the set $\mathcal{\mathbf{S}}$ consists of $(a,b)$ such that the rate of constructed vector linear index codes are at most $\frac{K~\text{mod}~(D+1)}{\left \lfloor \frac{K}{D+1} \right \rfloor}$ away from a known lower bound on broadcast rate of SUICP-SNI. The three known results on the exact capacity of the SUICP-SNI are recovered as special cases of our results. Also, we give a low complexity decoding procedure for the proposed vector linear index codes for the SUICP-SNI.
2017-05-31T00:00:00
no_new_dataset
false
0.704647
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10618
Lu-Xing Yang
Lu-Xing Yang, Tianrui Zhang, Xiaofan Yang, Yingbo Wu, Yuan Yan Tang
On the effectiveness of the truth-spreading/rumor-blocking strategy for restraining rumors
rumor spreading, truth-spreading/rumor-blocking strategy, effectiveness, individual-level spreading model, qualitative analysis of dynamical system, network structure. arXiv admin note: substantial text overlap with arXiv:1705.06604; text overlap with arXiv:1705.04818
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spreading truths and blocking rumors are two typical strategies for inhibiting rumors. In practice, a tradeoff between the two strategies, which is known as the TSRB strategy, may achieve a better cost-effectiveness. This paper is devoted to assessing the effectiveness of the TSRB strategy. For that purpose, an individual-level spreading model (the generic URQT model) capturing the interaction between a rumor and the truth is established. Under the model, a set of criteria for the dying out of a rumor is presented. These criteria capture the combined influence of the basic parameters and the network structures on the effectiveness of the TSRB strategy. Experimental results show that, when the rumor dies out, the dynamics of a simplified URQT model (the linear URQT model) fits well with the actual rumor-truth interacting process. Therefore, the generic URQT model and sometimes the linear URQT model provide a proper basis for assessing the effectiveness of the TSRB strategy.
2017-05-31T00:00:00
no_new_dataset
false
0.710626
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10622
Didier Clamond
Didier Clamond (JAD)
Remarks on bernoulli constants, gauge conditions and phase velocities in the context of water waves
null
null
null
null
physics.flu-dyn physics.class-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This short note is about the gauge condition for the velocity potential, the definitions of the Bernoulli constant and of the phase velocities in the context of water waves. These definitions are often implicit and thus the source of confusion in the literature.
2017-05-31T00:00:00
no_new_dataset
false
0.714625
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10624
Maksim Skorobogatiy
Kathirvel Nallappan, Jingwen Li, Hichem Guerboukha, Andrey Markov, Branko Petrov, Denis Morris and Maksim Skorobogatiy
A Dynamically Reconfigurable Terahertz Array Antenna for Near-field Imaging Applications
null
null
null
null
physics.optics physics.ins-det
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A proof of concept for high-speed near-field imaging with sub-wavelength resolution using an SLM is presented. An 8-channel THz detector array antenna with an electrode gap of 100 um and a length of 5 mm is fabricated using a commercially available GaAs semiconductor substrate. Each array antenna can be excited simultaneously by spatially reconfiguring the optical probe beam, and the THz electric field can be recorded using 8-channel lock-in amplifiers. By scanning the probe beam along the length of the array antenna, a 2D image can be obtained with amplitude, phase and frequency information.
2017-05-31T00:00:00
no_new_dataset
false
0.707317
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10625
Cl\'ement Miklarz
Pascal Caron, Ludovic Mignot, Cl\'ement Miklarz
On the decidability of $k$-Block determinism
15 pages, 13 figures, Submitted to Information and Computation, Continuing arXiv:1512.05475
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Br\"uggemann-Klein and Wood define a one-unambiguous regular language as a language that can be recognized by a deterministic Glushkov automaton. They give a procedure performed on the minimal DFA, the BW-test, to decide whether a language is one-unambiguous. Block determinism is an extension of one-unambiguity while considering non-empty words as symbols and prefix-freeness as determinism. A block automaton is compact if it does not have two equivalent states (same right language). We showed that a language is $k$-block deterministic if it is recognized by some deterministic $k$-block automaton passing the BW-test. In this paper, we show that any $k$-block deterministic language is recognized by a compact deterministic $k$-block automaton passing the BW-test. We also give a procedure which enumerates, for a given language, the finite set of compact deterministic $k$-block automata. It gives us a decidable procedure to test whether a language is $k$-block deterministic.
2017-05-31T00:00:00
no_new_dataset
false
0.708602
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10630
Hemani Kaushal Dr.
Hemani Kaushal and Georges Kaddoum
Optical Communication in Space: Challenges and Mitigation Techniques
41 pages, 13 Figures and 8 Tables. arXiv admin note: substantial text overlap with arXiv:1506.04836
IEEE Communications Surveys & Tutorials ( Volume: 19, Issue: 1, Firstquarter 2017 ), pp. 57-96, 2016
10.1109/COMST.2016.2603518
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, free-space optical (FSO) communication has gained significant importance owing to its unique features: large bandwidth, license-free spectrum, high data rates, easy and quick deployability, and low power and mass requirements. FSO communication uses an optical carrier in the near-infrared band to establish terrestrial links within the Earth's atmosphere, inter-satellite and deep-space links, or ground-to-satellite and satellite-to-ground links. However, despite its great potential, the performance of FSO communication is limited by adverse atmospheric effects, viz. absorption, scattering, and turbulence. This paper presents a comprehensive survey of the various challenges faced by FSO communication systems for ground-to-satellite or satellite-to-ground and inter-satellite links. It also provides details of various performance-mitigation techniques for achieving high link availability and reliability. The first part of the paper focuses on the various types of impairments that pose a serious challenge to the performance of optical communication systems for ground-to-satellite or satellite-to-ground and inter-satellite links. The latter part provides the reader with an exhaustive review of techniques both at the physical layer and at the other layers, i.e., the link, network, or transport layer, to combat the adverse effects of the atmosphere. It also uniquely presents a recently developed technique using orbital angular momentum for exploiting the high-capacity advantage of the optical carrier in space-based and near-Earth optical communication links. This survey provides the reader with comprehensive details on the use of space-based optical backhaul links to provide high-capacity and low-cost backhaul solutions.
2017-05-31T00:00:00
no_new_dataset
false
0.712242
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10633
Tom Vander Aa
Tom Vander Aa, Imen Chakroun and Tom Haber
Distributed Matrix Factorization using Asynchronous Communication
arXiv admin note: substantial text overlap with arXiv:1705.04159
null
10.1016/j.procs.2017.05.009
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The matrix factorization technique is very common in machine learning, mainly in areas like recommender systems. Despite its high prediction accuracy and its ability to avoid over-fitting the data, the Bayesian Probabilistic Matrix Factorization (BPMF) algorithm has not been widely used on large-scale data because of its prohibitive cost. In this paper, we propose a distributed high-performance parallel implementation of BPMF using Gibbs sampling on shared and distributed architectures. We show that by using efficient load balancing via work stealing on a single node, and by using asynchronous communication in the distributed version, we beat state-of-the-art implementations.
2017-05-31T00:00:00
no_new_dataset
false
0.709693
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10638
Stefano Dafarra
Stefano Dafarra, Francesco Romano and Francesco Nori
A Receding Horizon Push Recovery Strategy for Balancing the iCub Humanoid Robot
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Balancing and reacting to strong and unexpected pushes is a critical requirement for humanoid robots. We recently designed a capture-point-based approach which interfaces with a momentum-based torque controller, and we implemented and validated it on the iCub humanoid robot. In this work we implement a Receding Horizon control, also known as Model Predictive Control (MPC), enabling the controller to predict the future evolution of the robot, especially the constraint switching induced by the hybrid nature of the system. We prove that the proposed MPC extension makes the step-recovery controller more robust and reliable when executing the recovery strategy. Experiments in simulation show the results of the proposed approach.
2017-05-31T00:00:00
no_new_dataset
false
0.710353
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10639
Rick Smetsers
Rick Smetsers
Grammatical Inference as a Satisfiability Modulo Theories Problem
Submitted and selected for oral presentation at the LearnAut workshop at LICS 2017
null
null
null
cs.FL cs.LG cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The problem of learning a minimal consistent model from a set of labeled sequences of symbols is addressed from a satisfiability modulo theories perspective. We present two encodings for deterministic finite automata and extend one of these for Moore and Mealy machines. Our experimental results show that these encodings improve upon the state-of-the-art, and are useful in practice for learning small models.
2017-05-31T00:00:00
no_new_dataset
false
0.704124
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10640
Jordan Hachtel
Jordan A. Hachtel, Sang Yeon Cho, Roderick B. Davidson II, Matthew F. Chisholm, Richard F. Haglund, Juan Carlos Idrobo, Sokrates T. Pantelides, Benjamin J. Lawrie
Spatially and spectrally resolved orbital angular momentum interactions in plasmonic vortex generators
null
null
null
null
physics.optics
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the near-field electromagnetic interactions that produce optical orbital angular momentum (OAM) is central to the integration of twisted light into nanotechnology. Here, we examine the cathodoluminescence (CL) of plasmonic vortices carrying OAM generated in spiral nanostructures through scanning transmission electron microscopy (STEM). The nanospiral geometry defines the photonic local density of states (LDOS) sampled by STEM-CL, which provides access to the phase and amplitude of the plasmonic vortex with nanometer spatial and meV spectral resolution. We map the full spectral dispersion of the plasmonic vortex in the spiral structure and examine the effects of increasing topological charge on the plasmon phase and amplitude in the detected CL signal. The vortex is mapped in CL over a broad spectral range, and deviations between the predicted and detected positions of near-field optical signatures of as much as 5 per cent are observed. Finally, enhanced luminescence is observed from concentric spirals of like handedness compared to that from concentric spirals of opposite handedness, indicating the potential to couple plasmonic vortices to chiral nanostructures for sensitive detection and manipulation of optical OAM.
2017-05-31T00:00:00
no_new_dataset
false
0.711189
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10649
Vincent Neiger
Vincent Neiger, Thi Xuan Vu
Computing Canonical Bases of Modules of Univariate Relations
8 pages, uses acmart sigconf
null
10.1145/3087604.3087656
null
cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the computation of canonical bases of sets of univariate relations $(p_1,\ldots,p_m) \in \mathbb{K}[x]^{m}$ such that $p_1 f_1 + \cdots + p_m f_m = 0$; here, the input elements $f_1,\ldots,f_m$ are from a quotient $\mathbb{K}[x]^n/\mathcal{M}$, where $\mathcal{M}$ is a $\mathbb{K}[x]$-module of rank $n$ given by a basis $\mathbf{M}\in\mathbb{K}[x]^{n\times n}$ in Hermite form. We exploit the triangular shape of $\mathbf{M}$ to generalize a divide-and-conquer approach which originates from fast minimal approximant basis algorithms. Besides recent techniques for this approach, we rely on high-order lifting to perform fast modular products of polynomial matrices of the form $\mathbf{P}\mathbf{F} \bmod \mathbf{M}$. Our algorithm uses $O\tilde{~}(m^{\omega-1}D + n^{\omega} D/m)$ operations in $\mathbb{K}$, where $D = \mathrm{deg}(\det(\mathbf{M}))$ is the $\mathbb{K}$-vector space dimension of $\mathbb{K}[x]^n/\mathcal{M}$, $O\tilde{~}(\cdot)$ indicates that logarithmic factors are omitted, and $\omega$ is the exponent of matrix multiplication. This had previously only been achieved for a diagonal matrix $\mathbf{M}$. Furthermore, our algorithm can be used to compute the shifted Popov form of a nonsingular matrix within the same cost bound, up to logarithmic factors, as the previously fastest known algorithm, which is randomized.
2017-05-31T00:00:00
no_new_dataset
false
0.704637
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10654
Mahmood Mamivand
Mahmood Mamivand, Ying Yang, Jeremy Busby, Dane Morgan
Integrated Modeling of Second Phase Precipitation in Cold-Worked 316 Stainless Steels under Irradiation
null
Acta Materialia, Volume 130, 15 May 2017, pp. 94-110
null
null
cond-mat.mtrl-sci physics.app-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The current work combines the Cluster Dynamics (CD) technique and CALPHAD-based precipitation modeling to address the second phase precipitation in cold-worked (CW) 316 stainless steels (SS) under irradiation at 300-400 °C. CD provides the radiation enhanced diffusion and dislocation evolution as inputs for the precipitation model. The CALPHAD-based precipitation model treats the nucleation, growth and coarsening of precipitation processes based on classical nucleation theory and evolution equations, and simulates the composition, size and size distribution of precipitate phases. We benchmark the model against available experimental data at fast reactor conditions (9.4 x 10^-7 dpa/s and 390 °C) and then use the model to predict the phase instability of CW 316 SS under light water reactor (LWR) extended life conditions (7 x 10^-8 dpa/s and 275 °C). The model accurately predicts the gamma-prime (Ni3Si) precipitation evolution under fast reactor conditions and that the formation of this phase is dominated by radiation enhanced segregation. The model also predicts a carbide volume fraction that agrees well with available experimental data from a PWR reactor but is much higher than the volume fraction observed in fast reactors. We propose that radiation enhanced dissolution and/or carbon depletion at sinks that occurs at high flux could be the main sources of this inconsistency. The integrated model predicts ~1.2% volume fraction for carbide and ~3.0% volume fraction for gamma-prime for typical CW 316 SS (with 0.054 wt.% carbon) under LWR extended life conditions. This work provides valuable insights into the magnitudes and mechanisms of precipitation in irradiated CW 316 SS for nuclear applications.
2017-05-31T00:00:00
no_new_dataset
false
0.710848
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10658
Vincent Neiger
Vincent Neiger, Johan Rosenkilde, Eric Schost
Fast Computation of the Roots of Polynomials Over the Ring of Power Series
8 pages, uses acmart sigconf
null
10.1145/3087604.3087642
null
cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give an algorithm for computing all roots of polynomials over a univariate power series ring over an exact field $\mathbb{K}$. More precisely, given a precision $d$, and a polynomial $Q$ whose coefficients are power series in $x$, the algorithm computes a representation of all power series $f(x)$ such that $Q(f(x)) = 0 \bmod x^d$. The algorithm works unconditionally, in particular also with multiple roots, where Newton iteration fails. Our main motivation comes from coding theory where instances of this problem arise and multiple roots must be handled. The cost bound for our algorithm matches the worst-case input and output size $d \deg(Q)$, up to logarithmic factors. This improves upon previous algorithms which were quadratic in at least one of $d$ and $\deg(Q)$. Our algorithm is a refinement of a divide \& conquer algorithm by Alekhnovich (2005), where the cost of recursive steps is better controlled via the computation of a factor of $Q$ which has a smaller degree while preserving the roots.
2017-05-31T00:00:00
no_new_dataset
false
0.705299
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10659
Xiatian Zhu
Jingya Wang, Xiatian Zhu, Shaogang Gong
Discovering Visual Concept Structure with Sparse and Incomplete Tags
Artificial Intelligence journal 2017
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatically discovering the semantic structure of tagged visual data (e.g. web videos and images) is important for visual data analysis and interpretation, enabling machine intelligence to effectively process the fast-growing amount of multimedia data. However, this is non-trivial due to the need to jointly learn the underlying correlations between heterogeneous visual and tag data. The task is made more challenging by inherently sparse and incomplete tags. In this work, we develop a method for modelling the inherent visual data concept structures based on a novel Hierarchical-Multi-Label Random Forest model capable of correlating structured visual and tag information so as to more accurately interpret the visual semantics, e.g. disclosing meaningful visual groups with similar high-level concepts and recovering missing tags for individual visual data samples. Specifically, our model exploits hierarchically structured tags of different semantic abstractness and multiple tag statistical correlations, in addition to modelling visual and tag interactions. As a result, our model is able to discover more accurate semantic correlations between textual tags and visual features, and finally provides favourable visual semantics interpretation even with highly sparse and incomplete tags. We demonstrate the advantages of our proposed approach in two fundamental applications, visual data clustering and missing tag completion, on benchmark video (i.e. TRECVID MED 2011) and image (i.e. NUS-WIDE) datasets.
2017-05-31T00:00:00
no_new_dataset
false
0.711108
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset
1705.10664
Jiaji Zhou
Jiaji Zhou, J. Andrew Bagnell and Matthew T. Mason
A Fast Stochastic Contact Model for Planar Pushing and Grasping: Theory and Experimental Validation
Robotics: Science and Systems 2017
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Based on the convex force-motion polynomial model for quasi-static sliding, we derive the kinematic contact model to determine the contact modes and instantaneous object motion on a supporting surface given a position-controlled manipulator. The inherently stochastic object-to-surface friction distribution is modelled by sampling physically consistent parameters from appropriate distributions, with only one parameter to control the amount of noise. Thanks to the high fidelity and smoothness of convex polynomial models, the mechanics of patch contact is captured while remaining computationally efficient, without mode selection at support points. The motion equations for both single and multiple frictional contacts are given. Simulations based on the model are validated with robotic pushing and grasping experiments.
2017-05-31T00:00:00
no_new_dataset
false
0.711686
2026-01-25T00:43:33.318544
davanstrien/ModernBERT-base-is-new-arxiv-dataset