Semantic parsing of anatomical structures in X-ray images is a critical task in many clinical applications. Modern methods leverage deep convolutional networks and generally require a large amount of labeled data for model training. However, obtaining accurate pixel-wise labels on X-ray images is very challenging due to overlapping anatomy and complex texture patterns. In comparison, labeled CT data are more accessible, since organs in 3D CT scans preserve clearer structures and can thus be easily delineated. In this paper, we propose a framework for learning automatic X-ray image parsing from labeled 3D CT scans. Specifically, a Deep Image-to-Image network (DI2I) for multi-organ segmentation is first trained on X-ray-like Digitally Reconstructed Radiographs (DRRs) rendered from 3D CT volumes. We then build a Task Driven Generative Adversarial Network (TD-GAN) to achieve simultaneous synthesis and parsing for unseen real X-ray images. The entire pipeline requires no annotations from the X-ray image domain. In numerical experiments, we validate the proposed model on over 800 DRRs and 300 topograms. While the vanilla DI2I trained on DRRs without any adaptation fails completely at segmenting the topograms, the proposed model requires no topogram labels and achieves a promising average Dice score of 86%, approaching the accuracy of supervised training (89%). Furthermore, we demonstrate the generality of TD-GAN through quantitative and qualitative studies on a widely used public dataset.
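The pipeline above pairs a segmenter pre-trained on DRRs with an adversarial image translator. As a rough illustration of that general pattern, not the paper's actual architecture, the PyTorch sketch below freezes a stand-in segmenter S and updates a generator G with an adversarial term plus a task term. The entropy penalty used as the task term, the module definitions, the 0.1 loss weight, and the batch shapes are all illustrative assumptions, simplified stand-ins for TD-GAN's segmentation-driven supervision and cycle-consistent translation.

```python
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    """Placeholder backbone so the sketch runs end to end (not from the paper)."""
    def __init__(self, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, out_ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

G = TinyConvNet(out_ch=1)   # translator: real X-ray -> DRR-like image
D = TinyConvNet(out_ch=1)   # discriminator: per-pixel real-vs-fake logits
S = TinyConvNet(out_ch=4)   # stand-in DI2I segmenter, pre-trained on labeled DRRs
for p in S.parameters():    # the segmenter stays frozen; only G is adapted here
    p.requires_grad_(False)

bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

xray = torch.randn(2, 1, 64, 64)   # dummy batch of unlabeled real X-rays

fake_drr = G(xray)                 # synthesize a DRR-like image
d_out = D(fake_drr)
adv_loss = bce(d_out, torch.ones_like(d_out))  # fool D into calling it a DRR

# Task term: push the frozen segmenter toward confident organ maps on the
# translated image (an entropy penalty, standing in for the paper's
# segmentation-driven supervision).
probs = S(fake_drr).softmax(dim=1)
task_loss = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()

loss = adv_loss + 0.1 * task_loss  # 0.1 is an arbitrary illustrative weight
opt_g.zero_grad()
loss.backward()
opt_g.step()
# The discriminator's own update and TD-GAN's cycle consistency are omitted.
```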
Trophectoderm (TE) is one of the main components of a day-5 human embryo (blastocyst) and correlates with the embryo's quality. Precise segmentation of TE is an important step toward automatic human embryo quality assessment based on morphological image features. Automatic segmentation of TE, however, is a challenging task, and previous work on it is quite limited. In this paper, four fully convolutional deep models are proposed for accurate segmentation of trophectoderm in microscopic images of the human blastocyst. In addition, a multi-scaled ensembling method is proposed that aggregates five models trained at various scales, offering trade-offs between the quantity and quality of spatial information. Furthermore, synthetic embryo images are generated for the first time to address the lack of data for training deep learning models; these synthetically generated images prove effective at closing the generalization gap when limited training data are available. Experimental results confirm that the proposed models are capable of segmenting TE regions with an average Precision, Recall, Accuracy, Dice Coefficient and Jaccard Index of 83.8%, 90.1%, 96.9%, 86.61% and 76.71%, respectively. In particular, the proposed Inceptioned U-Net model outperforms the state of the art by 10.3% in Accuracy, 9.3% in Dice Coefficient and 13.7% in Jaccard Index. Further experiments highlight the effectiveness of the proposed models compared to recent deep learning based segmentation methods.
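Both segmentation results above are reported with the same pixel-wise overlap metrics. For reference, here is a minimal NumPy sketch of how Precision, Recall, Accuracy, Dice Coefficient and Jaccard Index are conventionally computed from binary masks; the function name and the eps guard are illustrative choices, not taken from either paper.

```python
import numpy as np

def segmentation_metrics(pred, target, eps=1e-7):
    """Standard pixel-wise metrics for binary masks (illustrative helper)."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.sum(pred & target)        # true positives
    fp = np.sum(pred & ~target)       # false positives
    fn = np.sum(~pred & target)       # false negatives
    tn = np.sum(~pred & ~target)      # true negatives
    return {
        "precision": tp / (tp + fp + eps),
        "recall":    tp / (tp + fn + eps),
        "accuracy":  (tp + tn) / (tp + fp + fn + tn + eps),
        "dice":      2 * tp / (2 * tp + fp + fn + eps),
        "jaccard":   tp / (tp + fp + fn + eps),  # IoU; equals dice / (2 - dice)
    }

# Example on random masks:
rng = np.random.default_rng(0)
pred = rng.integers(0, 2, (64, 64))
gt = rng.integers(0, 2, (64, 64))
print(segmentation_metrics(pred, gt))
```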
Adiponectin is downregulated in obesity, negatively impacting thermogenesis and impairing white fat browning. Despite the notable effects of green tea (GT) extract on enhancing thermogenesis, whether these effects are mediated by adiponectin has scarcely been explored. To address this, we investigated the role of adiponectin in the thermogenic actions of GT extract using an adiponectin-knockout mouse model. Male wild-type (WT) and knockout (AdipoKO) C57Bl/6 mice (3 months old) were divided into six groups: standard diet + gavage with water (SD WT and SD AdipoKO), high-fat diet (HFD) + gavage with water (HFD WT and HFD AdipoKO), and HFD + gavage with 500 mg/kg of body weight (BW) of GT extract (HFD+GT WT and HFD+GT AdipoKO). After 20 weeks of experimentation, the mice were euthanized and adipose tissue was properly removed.