Kirkpatrick Buur (commadirt98)
We show that deep learning attains super-resolution with challenging contrast-agent densities, both in silico and in vivo. Deep-ULM is suitable for real-time applications, resolving about 70 high-resolution patches (128×128 pixels) per second on a standard PC. Exploiting GPU computation, this number increases to 1250 patches per second.

People with diabetes are at risk of developing an eye disease called diabetic retinopathy (DR). This disease occurs when high blood glucose levels cause damage to blood vessels in the retina. Computer-aided DR diagnosis has become a promising tool for the early detection and severity grading of DR, due to the great success of deep learning. However, most current DR diagnosis systems do not achieve satisfactory performance or interpretability for ophthalmologists, due to the lack of training data with consistent and fine-grained annotations. To address this problem, we construct a large fine-grained annotated DR dataset containing 2,842 images (FGADR). Specifically, this dataset has 1,842 images with pixel-level DR-related lesion annotations, and 1,000 images with image-level labels graded by six board-certified ophthalmologists with intra-rater consistency. The proposed dataset will enable extensive studies on DR diagnosis. Further, we establish three benchmark tasks for evaluation: 1. DR lesion segmentation; 2. DR grading by joint classification and segmentation; 3. Transfer learning for ocular multi-disease identification. Moreover, a novel inductive transfer learning method is introduced for the third task. Extensive experiments using different state-of-the-art methods are conducted on our FGADR dataset, which can serve as baselines for future research. Our dataset will be released at https://csyizhou.github.io/FGADR/.

Short-term monitoring of lesion changes has been a widely accepted clinical guideline for melanoma screening. When there is a significant change in a melanocytic lesion at three months, the lesion is excised to exclude melanoma. However, the decision on change or no change depends heavily on the experience and bias of individual clinicians, and is therefore subjective. For the first time, a novel deep learning based method is developed in this paper for automatically detecting short-term lesion changes in melanoma screening. Lesion change detection is formulated as a task of measuring the similarity between two dermoscopy images of a lesion taken within a short time frame, and a novel Siamese-structure-based deep network is proposed to produce the decision: changed (i.e., not similar) or unchanged (i.e., similar enough). Under the Siamese framework, a novel structure, namely the Tensorial Regression Process, is proposed to extract global features of lesion images, in addition to deep convolutional features. To mimic the decision-making process of clinicians, who often focus more on regions with specific patterns when comparing a pair of lesion images, a segmentation loss (SegLoss) is further devised and incorporated into the proposed network as a regularization term. To evaluate the proposed method, an in-house dataset with 1,000 pairs of lesion images taken within a short time frame at a clinical melanoma centre was established. Experimental results on this first-of-a-kind large dataset indicate that the proposed model is promising for detecting short-term lesion changes for objective melanoma screening.
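The change-detection formulation above can be pictured as a Siamese network that embeds the two dermoscopy images with shared weights and classifies the pair as changed or unchanged. The following is a minimal sketch in PyTorch, not the authors' implementation: the backbone, embedding size, auxiliary segmentation head, and the loss weighting standing in for SegLoss are all assumptions made for illustration, and the Tensorial Regression Process is omitted.

```python
# Minimal Siamese change-detection sketch (illustrative only; architecture
# details and the auxiliary segmentation regularizer are assumptions, not
# the paper's actual design).
import torch
import torch.nn as nn

class SiameseChangeDetector(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        # Shared convolutional encoder applied to both images of a pair.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, embed_dim)
        # Pair classifier: changed vs. unchanged, from the two embeddings.
        self.classifier = nn.Sequential(
            nn.Linear(2 * embed_dim, 64), nn.ReLU(), nn.Linear(64, 2)
        )
        # Hypothetical auxiliary segmentation head, loosely mimicking the
        # regularizing role that SegLoss plays in the abstract.
        self.seg_head = nn.Conv2d(128, 1, 1)

    def embed(self, x):
        feat = self.encoder[:-1](x)           # conv features before pooling
        pooled = self.encoder[-1](feat).flatten(1)
        return self.fc(pooled), feat

    def forward(self, img_a, img_b):
        emb_a, feat_a = self.embed(img_a)
        emb_b, feat_b = self.embed(img_b)
        logits = self.classifier(torch.cat([emb_a, emb_b], dim=1))
        seg_a = self.seg_head(feat_a)         # coarse lesion-mask logits
        seg_b = self.seg_head(feat_b)
        return logits, seg_a, seg_b

# Toy forward/backward pass on random data.
model = SiameseChangeDetector()
img_a, img_b = torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224)
labels = torch.randint(0, 2, (4,))            # 0 = unchanged, 1 = changed
masks = torch.rand(4, 1, 28, 28)              # hypothetical coarse lesion masks
logits, seg_a, seg_b = model(img_a, img_b)
cls_loss = nn.functional.cross_entropy(logits, labels)
seg_loss = nn.functional.binary_cross_entropy_with_logits(seg_a, masks) \
         + nn.functional.binary_cross_entropy_with_logits(seg_b, masks)
loss = cls_loss + 0.1 * seg_loss              # segmentation term as a regularizer
loss.backward()
```

In this sketch the segmentation branch only shapes the shared features during training; at inference, only the pair classifier output would be used to decide changed versus unchanged.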
Although multi-view learning has made significant progress over the past few decades, it remains challenging due to the difficulty of modeling complex correlations among different views, especially when views are missing. To address this challenge, we propose a novel framework termed Cross Partial Multi-View Networks (CPM-Nets), which aims to fully and flexibly take advantage of multiple partial views. We first provide a formal definition of completeness and versatility for multi-view representation and then theoretically prove the versatility
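The abstract is truncated here, but the partial multi-view setting it describes can be made concrete with a small sketch: each sample has several views, some of which may be missing, and the available views are encoded and fused into a common latent representation. This is a generic illustration of the setting under assumed per-view encoders and a simple mean-fusion rule, not the CPM-Nets method itself.

```python
# Generic partial multi-view fusion sketch (an assumed baseline, not CPM-Nets):
# encode each observed view and average the embeddings, using a per-sample
# mask to indicate which views are present.
import torch
import torch.nn as nn

class PartialMultiViewEncoder(nn.Module):
    def __init__(self, view_dims, latent_dim=64):
        super().__init__()
        # One small encoder per view; views may have different input dimensions.
        self.encoders = nn.ModuleList(
            nn.Sequential(nn.Linear(d, 128), nn.ReLU(), nn.Linear(128, latent_dim))
            for d in view_dims
        )

    def forward(self, views, mask):
        # views: list of tensors, one (batch, dim_v) tensor per view
        # mask:  (batch, num_views) binary tensor, 1 where the view is observed
        embeddings = torch.stack(
            [enc(x) for enc, x in zip(self.encoders, views)], dim=1
        )                                         # (batch, num_views, latent_dim)
        weights = mask.unsqueeze(-1)              # zero out missing views
        fused = (embeddings * weights).sum(1) / weights.sum(1).clamp(min=1.0)
        return fused                              # common representation per sample

# Toy usage: three views of dimensions 20, 50, and 10; the second view is
# missing for some samples (its entries are placeholders and get masked out).
model = PartialMultiViewEncoder(view_dims=[20, 50, 10])
batch = 8
views = [torch.randn(batch, 20), torch.randn(batch, 50), torch.randn(batch, 10)]
mask = torch.ones(batch, 3)
mask[:4, 1] = 0.0                                 # first four samples lack view 2
latent = model(views, mask)
print(latent.shape)                               # torch.Size([8, 64])
```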