Recent progress on salient object detection (SOD) mostly benefits from the explosive development of Convolutional Neural Networks (CNNs). However, much of the improvement comes with larger network sizes and heavier computation overhead, which, in our view, is not mobile-friendly and is thus difficult to deploy in practice. To promote more practical SOD systems, we introduce a novel Stereoscopically Attentive Multi-scale (SAM) module, which adopts a stereoscopic attention mechanism to adaptively fuse features of various scales. Building on this module, we propose an extremely lightweight network, namely SAMNet, for SOD. Extensive experiments on popular benchmarks demonstrate that the proposed SAMNet yields accuracy comparable to state-of-the-art methods while running at 343 fps on a GPU and 5 fps on a CPU for 336×336 inputs, with only 1.33M parameters. SAMNet therefore paves a new path towards practical SOD. The source code is available on the project page https://mmcheng.net/SAMNet/.

The kinetic analysis of 18F-FET time-activity curves (TACs) can provide valuable diagnostic information in glioma patients. The analysis is most often limited to the average TAC over a large tissue volume and is normally assessed by visual inspection or by evaluating the time-to-peak and the linear slope during the late uptake phase. Here, we derived and validated a linearized model for TACs of 18F-FET in dynamic PET scans. Emphasis was put on the robustness of the numerical parameters and on how reliably automatic voxel-wise analysis of TAC kinetics was possible. The diagnostic performance of the extracted shape parameters for discriminating between isocitrate dehydrogenase (IDH) wildtype (wt) and IDH-mutant (mut) glioma was assessed by receiver operating characteristic analysis in a group of 33 adult glioma patients. High agreement between the adjusted model and the measured TACs was obtained, and the relative estimated parameter uncertainties were small. The best differentiation between IDH-wt and IDH-mut gliomas was achieved with the linearized model fitted to the averaged TAC values from dynamic FET PET data in the time interval 4-50 min p.i. When the acquisition time was limited to 20-40 min p.i., classification accuracy was only slightly lower (-3%) and was comparable to classification based on linear fits in this time interval. Voxel-wise fitting was possible within a computation time of about 1 min per image slice, and parameter uncertainties smaller than 80% were achieved for all fits with the linearized model. The agreement of best-fit parameters between voxel-wise fits and fits of averaged TACs was very high (p < 0.001).
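The linearized TAC model itself is not reproduced in the abstract above, so the following is only a minimal sketch of how voxel-wise fitting can be done when the model is linear in its parameters. The function name fit_linearized_tac, the default intercept-plus-slope basis, and all array shapes are illustrative assumptions, not the published formulation.

```python
import numpy as np

def fit_linearized_tac(tacs, frame_times, basis=None):
    """Sketch: voxel-wise least-squares fit of 18F-FET time-activity curves
    with a model that is linear in its parameters (hypothetical basis).

    tacs        : (n_voxels, n_frames) array, one TAC per voxel
    frame_times : (n_frames,) frame mid-times in minutes p.i.
    basis       : optional (n_frames, n_params) design matrix; defaults to a
                  simple intercept-plus-slope basis [1, t].
    """
    t = np.asarray(frame_times, dtype=float)
    if basis is None:
        basis = np.column_stack([np.ones_like(t), t])
    # A linear-in-parameters model lets all voxels share one design matrix,
    # so an entire image slice can be fitted with a single least-squares solve.
    params, *_ = np.linalg.lstsq(basis, np.asarray(tacs, dtype=float).T, rcond=None)
    return params.T                                    # (n_voxels, n_params)

# Example: restrict the fit to frames acquired 20-40 min p.i.
# mask = (frame_times >= 20) & (frame_times <= 40)
# late_params = fit_linearized_tac(tacs[:, mask], frame_times[mask])
```

Sharing the design matrix across voxels is what keeps such voxel-wise fitting fast enough for whole-slice processing in roughly a minute, as reported above.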
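Returning to the SAMNet abstract above: the SAM module is only described at a high level, so the block below is merely a generic PyTorch-style sketch of attention-weighted fusion of multi-scale features. The dilated depthwise branches, the layer sizes, and the class name MultiScaleAttentionFusion are assumptions made for illustration and do not reproduce the published design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleAttentionFusion(nn.Module):
    """Hypothetical sketch of attention-weighted multi-scale feature fusion.
    Not the published SAM module, only an illustration of the idea."""

    def __init__(self, channels, num_scales=3):
        super().__init__()
        # Lightweight per-scale branches: depthwise 3x3 convs with growing dilation.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d,
                      groups=channels, bias=False)
            for d in range(1, num_scales + 1)
        )
        # Predict one attention logit per scale from globally pooled context.
        self.attn = nn.Conv2d(channels, num_scales, kernel_size=1)

    def forward(self, x):                                   # x: [B, C, H, W]
        # Per-scale responses stacked along a new "scale" axis: [B, S, C, H, W]
        feats = torch.stack([branch(x) for branch in self.branches], dim=1)
        # Softmax-normalised weight per scale: [B, S, 1, 1]
        weights = torch.softmax(self.attn(F.adaptive_avg_pool2d(x, 1)), dim=1)
        # Weighted sum over scales: [B, C, H, W]
        return (weights.unsqueeze(2) * feats).sum(dim=1)

# Example: fuse three receptive-field scales of a 64-channel feature map.
# fuse = MultiScaleAttentionFusion(channels=64)
# out = fuse(torch.randn(2, 64, 56, 56))   # -> shape [2, 64, 56, 56]
```

Depthwise branches and a single scalar weight per scale are one way such a fusion block can stay lightweight, in the spirit of the 1.33M-parameter budget mentioned above.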
Imitation learning has recently been applied to mimic the operation of a cameraman in existing autonomous camera systems. To imitate different filming styles, these methods have to train multiple independent models, and each model requires a significant number of training samples to learn one specific style. In this paper, we propose a framework that can imitate a filming style after seeing only a single demonstration video of the target style, i.e., one-shot imitation filming. This is achieved by two key enabling techniques: 1) filming style feature extraction, which encodes the sequential cinematic characteristics of a variable-length video clip into a fixed-length feature vector, and 2) camera motion prediction, which dynamically plans the camera trajectory to reproduce the filming style of the demo video. We implemented the approach with a deep neural network and deployed it on a 6-degrees-of-freedom (DOF) drone system by first predicting the future camera motions and then converting them into the drone's control commands via odometry. Our experimental results on comprehensive datasets and showcases demonstrate that the proposed approach achieves significant improvements over conventional baselines and can mimic the footage of an unseen style with high fidelity.
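As a rough illustration of the two components named above (style feature extraction and camera motion prediction), here is a minimal PyTorch-style sketch. The GRU encoder, the MLP predictor, the feature dimensions, and the class names are assumptions for illustration, not the architecture used in the paper, and the final conversion of predicted motions into drone commands via odometry is not shown.

```python
import torch
import torch.nn as nn

class FilmingStyleEncoder(nn.Module):
    """Sketch: map a variable-length sequence of per-frame cinematic features
    (e.g. subject position and size in the frame) to a fixed-length style vector."""

    def __init__(self, frame_feat_dim=32, style_dim=64):
        super().__init__()
        self.gru = nn.GRU(frame_feat_dim, style_dim, batch_first=True)

    def forward(self, frame_feats):                # [B, T, frame_feat_dim], any T
        _, last_hidden = self.gru(frame_feats)     # [1, B, style_dim]
        return last_hidden.squeeze(0)              # [B, style_dim]


class CameraMotionPredictor(nn.Module):
    """Sketch: predict the next 6-DOF camera motion from the current shot
    state, conditioned on the style vector of the single demo video."""

    def __init__(self, state_dim=16, style_dim=64, motion_dim=6):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(state_dim + style_dim, 128),
            nn.ReLU(),
            nn.Linear(128, motion_dim),
        )

    def forward(self, state, style):               # [B, state_dim], [B, style_dim]
        return self.mlp(torch.cat([state, style], dim=-1))   # [B, 6] pose delta


# Example one-shot usage: encode the demo clip once, then condition every
# motion-prediction step on the same style vector.
# style = FilmingStyleEncoder()(demo_frame_feats)
# next_motion = CameraMotionPredictor()(current_state, style)
```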