Saleh Cain (openplot34)

In this paper, we propose a nested encoder-decoder architecture named T-Net. T-Net consists of several small encoder-decoders, one for each block constituting the convolutional network. T-Net overcomes the limitation that U-Net can only have a single set of concatenation layers between encoder and decoder blocks. More precisely, U-Net forms its concatenation layers symmetrically, so the low-level features of the encoder are connected to the latter part of the decoder, and the high-level features are connected to the beginning of the decoder. T-Net instead arranges pooling and up-sampling appropriately during the encoding process, and likewise during the decoding process, so that feature maps of various sizes are obtained within a single block. As a result, all features extracted by the encoder, from low-level to high-level, are delivered to the beginning of the decoder to predict a more accurate mask. We evaluated T-Net on the problem of segmenting the three main vessels in coronary angiography images. The experiments comprised a comparison of U-Net and T-Net under the same conditions, and a T-Net optimized for main vessel segmentation. T-Net recorded a Dice Similarity Coefficient (DSC) of 83.77%, 10.69% higher than that of U-Net, and the optimized T-Net recorded a DSC of 88.97%, 15.89% higher than that of U-Net. In addition, we visualized the weight activations of the convolutional layers of T-Net and U-Net to show that T-Net actually predicts the mask from earlier decoder stages. We therefore expect that T-Net can be effectively applied to other, similar medical image segmentation problems.

In order to refine the analysis of the computational power of discrete-time recurrent neural networks (NNs) between the binary-state NNs, which are equivalent to finite automata (level 3 in the Chomsky hierarchy), and the analog-state NNs with rational weights, which are Turing-complete (Chomsky level 0), we study an intermediate model, αANN, of a binary-state NN that is extended with α ≥ 0 extra analog-state neurons. For rational weights, we establish the analog neuron hierarchy 0ANNs ⊂ 1ANNs ⊂ 2ANNs ⊆ 3ANNs and separate its first two levels. In particular, 0ANNs coincide with the binary-state NNs (Chomsky level 3) and form a proper subset of 1ANNs, which accept at most context-sensitive languages (Chomsky level 1), including some non-context-free ones (above Chomsky level 2). We prove that the deterministic context-free language L# = { 0^n 1^n ∣ n ≥ 1 } cannot be recognized by any 1ANN, even with real weights. In contrast, we show that deterministic pushdown automata, which accept the deterministic context-free languages, can be simulated by 2ANNs with rational weights; 2ANNs thus constitute a proper superset of 1ANNs. Finally, we prove that the analog neuron hierarchy collapses to 3ANNs by showing that any Turing machine can be simulated by a 3ANN with rational weights, with linear-time overhead.

Graph Neural Networks (GNNs) have recently become a topic of intense research owing to their powerful capability in high-dimensional classification and regression tasks for graph-structured data. However, as GNNs typically define the graph convolution by the orthonormal eigenbasis of the graph Laplacian, they suffer from high computational cost when the graph is large. This paper introduces a Haar basis, a sparse and localized orthonormal system for a coarse-grained chain on the graph. The graph convolution under the Haar basis, called Haar convolution, can then be defined for GNNs. The sparsity and locality of the Haar basis allow Fast Haar Transforms (FHTs) on the graph, by which one achieves fast evaluation of the Haar convolution between graph data and filters. We conduct experiments on GNNs equipped with Haar convolution, which demonstrate state-of-the-art results on graph-based regression and node classification tasks.

Accurately segmenting contrast-filled vessels from an X-ray coronary angiography (XCA) image sequence is an essential step for t
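The architectural idea in the T-Net abstract above, delivering every encoder level to the beginning of the decoder instead of through U-Net's symmetric one-to-one skips, can be sketched compactly. The following is a minimal illustrative sketch in PyTorch, not the paper's configuration: the channel counts, the two-level layout, and the pooling scales are assumptions made for brevity.

```python
# Minimal sketch of the T-Net wiring idea (channel counts, depth, and
# scales are illustrative assumptions, not the paper's configuration):
# each block internally pools/upsamples to emit multi-scale features,
# and ALL encoder levels reach the decoder at its first stage.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleBlock(nn.Module):
    """Conv block that emits feature maps fused from several spatial scales."""
    def __init__(self, in_ch, out_ch, scales=(1, 2, 4)):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))
        self.scales = scales

    def forward(self, x):
        h = self.conv(x)
        outs = []
        for s in self.scales:
            # Pool to each scale, then upsample back so the maps can be fused.
            p = F.avg_pool2d(h, s) if s > 1 else h
            outs.append(F.interpolate(p, size=h.shape[-2:],
                                      mode='bilinear', align_corners=False))
        return torch.cat(outs, dim=1)  # coarse and fine evidence together

class TNetSketch(nn.Module):
    def __init__(self, in_ch=1, base=16, num_classes=1):
        super().__init__()
        self.enc1 = MultiScaleBlock(in_ch, base)         # -> 3*base channels
        self.enc2 = MultiScaleBlock(3 * base, 2 * base)  # -> 6*base channels
        # The decoder's FIRST stage receives every encoder level at once.
        self.dec1 = nn.Conv2d(3 * base + 6 * base, base, 3, padding=1)
        self.head = nn.Conv2d(base, num_classes, 1)

    def forward(self, x):
        f1 = self.enc1(x)
        f2 = self.enc2(F.max_pool2d(f1, 2))
        f2_up = F.interpolate(f2, size=f1.shape[-2:],
                              mode='bilinear', align_corners=False)
        # All features, low-level to high-level, enter the decoder at its start.
        d = F.relu(self.dec1(torch.cat([f1, f2_up], dim=1)))
        return torch.sigmoid(self.head(d))  # predicted segmentation mask

# Usage: TNetSketch()(torch.randn(1, 1, 64, 64)).shape -> (1, 1, 64, 64)
```

The point of the sketch is the wiring, not the layer choices: every block emits multi-scale features, and the decoder sees all of them from its first stage, which is what lets earlier decoder layers already predict the mask.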
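For the simulation of deterministic pushdown automata by 2ANNs claimed in the second abstract, the standard ingredient is encoding an unbounded binary stack into a single rational number, so that one analog-state neuron can carry the pushdown storage while binary-state neurons implement the finite control. The sketch below shows only this classic base-4 encoding, in the Siegelmann-Sontag style; the paper's actual neuron dynamics may differ.

```python
# A minimal sketch of the classic rational-number stack encoding behind
# such simulations (the paper's exact construction may differ): a binary
# stack is packed into one rational in [0, 1), using base-4 digits 2b+1
# so the top symbol stays recoverable by a simple threshold test.
from fractions import Fraction

def push(q, bit):
    # Prepend the symbol as the leading base-4 digit.
    return Fraction(q, 4) + Fraction(2 * bit + 1, 4)

def top(q):
    # Nonempty stack: q >= 3/4 iff the top bit is 1; q in [1/4, 1/2) iff 0.
    return 1 if q >= Fraction(3, 4) else 0

def pop(q):
    # Invert push: shift left in base 4 and drop the leading digit.
    return 4 * q - (2 * top(q) + 1)

q = Fraction(0)          # empty stack
for b in [1, 0, 1]:      # push 1, then 0, then 1
    q = push(q, b)
assert top(q) == 1
q = pop(q); assert top(q) == 0
q = pop(q); assert top(q) == 1
```

Exact rational arithmetic (fractions.Fraction) is used here deliberately: the whole construction depends on the analog state never being rounded, which is also why the result holds for rational-weight networks.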
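The Haar convolution in the third abstract is, structurally, "transform, filter coefficient-wise, invert", with the speed coming from the sparsity and locality of the basis. Below is a toy sketch on four nodes with a hand-written orthonormal Haar basis; a real implementation would derive the basis from a coarse-grained chain of the input graph and evaluate it via the FHTs rather than dense matrix products.

```python
# Toy sketch of Haar convolution on a 4-node graph signal (the basis and
# filter here are illustrative; the paper constructs the Haar basis from
# a coarse-grained chain of the graph and applies Fast Haar Transforms).
import numpy as np

# Orthonormal Haar basis for n = 4: one global average column plus
# localized difference columns -- sparse, unlike Laplacian eigenvectors.
phi = np.array([
    [1/2,  1/2,  1/np.sqrt(2),  0],
    [1/2,  1/2, -1/np.sqrt(2),  0],
    [1/2, -1/2,  0,             1/np.sqrt(2)],
    [1/2, -1/2,  0,            -1/np.sqrt(2)],
])
assert np.allclose(phi.T @ phi, np.eye(4))  # orthonormality check

def haar_conv(theta, x, basis=phi):
    """Haar convolution: forward transform, filter per coefficient, invert."""
    return basis @ (theta * (basis.T @ x))

x = np.array([1.0, 2.0, 3.0, 4.0])      # signal on the 4 nodes
theta = np.array([1.0, 0.5, 0.5, 0.5])  # learnable spectral filter
y = haar_conv(theta, x)
```

At this toy size the dense products are fine; the FHTs exploit the zeros in each basis column to bring the transform cost down on large graphs, which is the computational point of the paper.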