The finding of an aortic phenotype in a mouse model with a complete Smad3 deficiency, very similar to the human disease, supports the idea that loss of functional SMAD3 could also cause the human clinical phenotype. A similar scenario has been described for the TGF-β2 mouse model (Boileau et al., 2012). Moreover, in aneurysmal diseases such as Marfan's syndrome (MFS), the efficacy of interventions that target the TGF-β signaling pathway is being explored. So far, the effects on delaying aneurysmal growth are quite promising (Neptune et al., 2003; Ng et al., 2004; Habashi et al., 2006; Cohn et al., 2007). However, similar intervention strategies might not be beneficial in the case of Smad3 deficiency. Since downstream transcriptional activation is hampered in the absence of Smad3, inhibition of components of the TGF-β signaling pathway might in this case worsen the outcome, as even less ECM would be generated and alternative ‘escape’ pathways would be blocked.
In conclusion, Smad3 deficiency leads to aortic aneurysms and sudden death in the Smad3 knockout animal model. This phenotype is influenced by the age and sex of the animals. Although Smad3 is absent, we observed increased nuclear translocation of pSmad2 and upregulated pERK signaling, suggesting increased upstream TGF-β receptor activation. However, the downstream TGF-β-activated transcriptional response appeared impaired, as inferred from the absence of MMP activation and the lack of amorphous ECM accumulation in Smad3−/− mouse aortas. Together, our data stress the importance of identifying the molecular mechanism of aneurysmal disease, as the outcome, and therefore the treatment options, can differ dramatically. At the same time, the Smad3−/− mouse proves to be an ideal model in which to begin testing these different interventional options.

Funding Sources
This study was funded by ‘Stichting lijf en leven’ (project: dilating versus stenosing arterial disease, 2011–2015), and partially funded by an Erasmus Fellowship (2009) to AM Bertoli-Avella. Funders had no role in study design, data collection, data analysis, interpretation or writing of the manuscript.

Acknowledgements
We would like to acknowledge Fumiko Itoh, PhD, from the Department of Experimental Pathology, University of Tsukuba, Japan, for the generous donation of Smad3 mutant mice.

Introduction
Vaccine confidence is an increasingly important global public health issue, with decreases in confidence leading to well-documented cases of disease outbreaks, setbacks to global polio eradication and other immunization goals, and contentious political debates in high- and low-income countries alike (Brown et al., 2010; Hanley et al., 2015; Khetsuriani et al., 2010; Larson et al., 2011; Yu et al., 2016). The World Health Organization's (WHO) Strategic Advisory Group of Experts (SAGE) on Immunization (WHO, 2014), as well as national immunization programmes (US Dept. Health and Human Services, 2015), have called for better monitoring of vaccine confidence and hesitancy to inform the development of communication and other interventions that address confidence gaps, sustain confidence in vaccines and immunization programmes, and avert confidence crises and their public health consequences.
In March 2012, the SAGE Working Group on Vaccine Hesitancy convened to define “vaccine hesitancy” and to develop and standardize survey frameworks within which the scale and determinants of vaccine hesitancy and vaccine confidence can be measured (Larson et al., 2015a, 2015b). A number of studies have since investigated attitudes towards vaccines in diverse contexts, including attitudes towards immunization programmes (Dubé et al., 2016), vaccine hesitancy among general practitioners (GPs) (Verger et al., 2015), the detrimental effects of non-voluntary immunization campaigns (especially amongst those already expressing negative vaccine sentiment) (Betsch and Böhm, 2016), and social network analyses identifying clustering of vaccine-refusing households (Onnela et al., 2016).

The main finding of our study is the significant correlation between vascularity parameters of the ileal wall and VEGF serum levels detected in the fasting state. These findings, which are in keeping with those of Di Sabatino et al. (2004) (although obtained with different methodology and in CD patients with active disease), confirm the role of VEGF in angiogenesis and vascular remodeling in CD, also in patients in clinical remission. The link between VEGF and angiogenesis suggests that angiogenesis could also be a potential target of therapies for CD. The relationship between parietal blood flow and VEGF was observed only in the fasting state. This may be attributed to the fact that, in the post-prandial period, other factors may affect vascularity of the bowel wall (e.g., gastrointestinal hormones, the enteric nervous system). Nitric oxide (NO) is among the factors that may play a role, as suggested by the results of our study.
We also observed a correlation between vascularity indexes of the ileal wall and increased blood flow in the PV and decreased vascular resistance (RI and PI) of the SMA; this was the case in the fasting state only. These findings confirm that in CD, the diseased bowel wall is the main culprit in impaired splanchnic blood flow (Maconi et al. 1998) and its parietal vascularity plays a relevant role.
We acknowledge that this study may be limited by the small number of patients included, the high variability of some measurements of intestinal blood flow, and the failure to obtain some parameters of bowel vascularity in healthy controls. Furthermore, the depth of, and tissues overlying, the regions of interest where we assessed bowel vascularity in vivo may account for this variability and may have somewhat decreased the accuracy of directly assessing post-prandial intestinal blood flow at the exact site of the fasting examination. In particular, in healthy controls, assessment of vascularity with CEUS in the same intestinal segments before and after a meal was possible in only 4 of 10 individuals.

Introduction
Erectile dysfunction (ED) is defined as the consistent inability to attain or maintain a penile erection of sufficient quality to permit satisfactory sexual intercourse (NIH Consensus Development Panel on Impotence 1993). The prevalence of this disorder increases with age. In a cross-sectional study in the United States (Feldman et al. 1994), the prevalence of ED was 22% among men between the ages of 40 and 49 y and increased to 49% among men between the ages of 70 and 79 y. In China, the prevalence of ED was reported to be 26% among men aged 41–50 y and 65% among men aged 61–70 y (Bai et al. 2004). It has been estimated that the worldwide prevalence of ED will increase by 111% by 2025 compared with 1995 (Ayta et al. 1999).
The cavernosal artery (CA) supplies the corpora cavernosa via multiple helicine arteries (HAs), and the urethral artery (UA) runs forward along the dorsal side of the tunica albuginea of the corpus spongiosum, providing the blood supply to it and the urethra. There are anastomoses between the CA and UA (Diallo et al. 2013), forming a network called the cavernosal–spongiosal communications (CSCs). The CSCs are anatomically distinct from the HAs, although both are branches of the CA. The HAs give rise to multiple arterioles entering the corpora cavernosa, whereas the CSCs run vertically toward the ventral side of the penis and enter the corpus spongiosum after passing through the tunica albuginea (Wagner et al. 1982). The CSCs, with a diameter of approximately 0.1–0.4 mm, were thought to carry blood from the CA to the UA under normal conditions, but the blood flow can be reversed during intra-urethral alprostadil-induced erection (Droupy et al. 1999). However, the exact physiologic role of the CSCs, which connect two erectile corpora with very different pressures during erection, remains unclear.

During the last decade, animal research has allowed translation to human studies of visual texture segregation, including relationships between event-related potentials and localization of sources (Lamme, Van Dijk, & Spekreijse, 1992; Roelfsema et al., 2002). It has been demonstrated, for instance, that this process can be detected in adults with particular VEPs, namely texture segregation visual evoked potentials (tsVEPs), and that it can be obtained in response to stimuli defined by luminance, orientation, motion and stereo (Bach & Meigen, 1997). Electrophysiological studies that have investigated this process in normal adults have shown a negative component peaking around 200 ms after stimulus onset (texture-related negativity), obtained from the difference wave between the VEPs elicited by textured versus homogeneous stimuli (Caputo & Casco, 1999; Lamme, Van Dijk, & Spekreijse, 1992). This component originates from area V1 and is thought to reflect the combination of information from the V2 and V3 associative visual areas through feedback connections (Scholte et al., 2008). Therefore, tsVEPs provide an intermediate measure of visual processing between lower-level VEPs, which peak at around 100 ms, and cognitive event-related responses, which usually peak more than 300 ms after stimulus onset (Lachapelle et al., 2004).
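To make the difference-wave computation concrete, the short Python sketch below averages hypothetical EEG epochs for textured and homogeneous stimuli and subtracts the two averaged VEPs to isolate texture-related activity; the epoch arrays, sampling rate and peak-search window are illustrative assumptions, not data from the cited studies.

```python
# Minimal sketch of a texture-segregation difference wave (assumed data).
import numpy as np

fs = 500                                    # sampling rate in Hz (assumption)
n_epochs, n_samples = 60, 400               # 400 samples = 800 ms per epoch
rng = np.random.default_rng(1)

# Stand-in EEG epochs (epochs x samples, in microvolts); real data would come
# from segmented recordings time-locked to stimulus onset.
textured_epochs = rng.normal(size=(n_epochs, n_samples))
homogeneous_epochs = rng.normal(size=(n_epochs, n_samples))

texture_vep = textured_epochs.mean(axis=0)          # averaged VEP, textured stimuli
homogeneous_vep = homogeneous_epochs.mean(axis=0)   # averaged VEP, homogeneous stimuli
difference_wave = texture_vep - homogeneous_vep     # texture-related activity

# Locate the most negative point in a window around 200 ms post-onset,
# where the texture-related negativity is expected.
window = slice(int(0.15 * fs), int(0.25 * fs))
peak_index = window.start + int(np.argmin(difference_wave[window]))
print(f"Texture-related negativity peak near {1000 * peak_index / fs:.0f} ms")
```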
Few studies of visual texture segregation have been conducted in children to date, so very little is known about its normal developmental pattern. Early behavioral studies found it to appear around 9–12 months of age (Rieth & Sireteanu, 1994; Sireteanu & Rieth, 1992), while VEP studies suggested that the ability to discriminate texture-defined stimuli emerges around 14–18 weeks of age (Atkinson & Braddick, 1992). Using VEPs, Arcand et al. (2007) demonstrated a clear developmental pattern characterized by changes in amplitude, latency and scalp distribution of texture segregation processes during the first year of life; tsVEPs appear at approximately 3 months of age and continue to develop until 12 months, but are still immature at this age. Furthermore, van den Boomen, Lamme, and Kemner (2014) studied the developmental trajectory of visual texture segregation in typically developing children aged 7 to 18 years. They found significant differences in event-related potentials between age groups 7–8 and 9–10 years, as well as between age groups 11–12 and 13–14 years, which they considered the strongest developmental periods for this process. According to these authors, visual texture segmentation continues to develop until early puberty, when it reaches adult-like EEG responses. However, its developmental pattern during early childhood (i.e., between 12 months and school age) remains unknown. Consequently, a better understanding of this developmental trajectory is needed so that it may subsequently be used to study developmental disorders associated with altered visual texture segregation processing or “higher-order” visual analysis, such as autism spectrum disorder (Rivest et al., 2013), Williams syndrome (Palomares & Shannon, 2013), and prematurity (Thibault et al., 2007).
Although no study has investigated the effect of prematurity on texture segregation VEPs, it has been proposed that preterm birth can disrupt the development of feedforward connections, just as it alters the development of the primary visual pathways (e.g., the magnocellular-dorsal pathway), as shown by VEPs and source analyses (Hammarrenger et al., 2007; Lassonde et al., 2010; O’Reilly et al., 2010; Tremblay et al., 2014). As higher-level visual processing relies on both the magnocellular and parvocellular pathways, the developmental course of texture segregation could also be compromised by prematurity. Moreover, recent imaging studies have shown white matter microstructural alterations in the visual cortex of children born preterm during childhood and adolescence, which have been related to higher risks of visual impairment in this population (Kelly et al., 2014; Oros et al., 2014; Thompson et al., 2014). These studies also support the idea that the connections supporting visual processing might be affected in preterm children. We therefore hypothesize that altered development of the visual pathways also has a deleterious effect on the development of higher-level visual functions such as texture segregation. Consequently, the goal of the present study was to characterize the developmental pattern of visual texture segregation processing during early childhood using tsVEPs in (1) children born full-term, and (2) children born preterm without major neurological impairment.

Fourier transform infrared spectroscopy (FTIR) is a powerful technique for quantitative and qualitative characterization of biological specimens and molecules in human and veterinary medicine (Shaw and Mantsch, 2000). The use of FTIR for such applications generally flows from chemometric analysis of the spectroscopic data, which are composed of overlapping absorption bands arising from the constituent molecular species (Dubois and Shaw, 2004; Shaw et al., 2008). These absorption patterns for biological tissues and fluids provide the basis to quantify diagnostically relevant constituents in serum, urine and other biological fluids, and may also be used for the direct diagnosis of disease (Petrich et al., 2002). Quantitative method development has been made possible by integrating FTIR techniques with chemometric tools such as partial least squares (PLS) regression, principal component analysis, and other approaches. These tools enable the development of quantitative analytical methods that require no molecular separation techniques or reagents (Hou et al., 2014). In our laboratory, these techniques have previously been used as the basis for rapid, reagent-free methods of measuring serum or plasma IgG concentrations in horses and alpacas (Riley et al., 2007; Burns et al., 2014). If the same approach were to provide an accurate test in dogs, FTIR spectroscopy could prove to be a desirable method for measuring canine serum (or plasma) IgG concentrations and for further investigations into neonatal immunity.
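As a rough illustration of this kind of PLS-based calibration workflow, the sketch below fits a PLS regression model to stand-in spectra and reference IgG values using scikit-learn; the matrix dimensions, random data, factor count and calibration/prediction split are assumptions for demonstration only and are not the method used in the studies cited.

```python
# Sketch of PLS calibration of an analyte (e.g., IgG) from spectra (assumed data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 200, 1500                 # hypothetical dimensions
X = rng.normal(size=(n_samples, n_wavenumbers))      # absorbance spectra (stand-in)
y = rng.uniform(400, 4000, size=n_samples)           # reference IgG, mg/dL (stand-in)

# Hold out a prediction set; the rest is used to calibrate the model.
X_cal, X_pred, y_cal, y_pred = train_test_split(X, y, test_size=0.25, random_state=0)

pls = PLSRegression(n_components=10)  # latent factors, normally chosen by cross-validation
pls.fit(X_cal, y_cal)
y_hat = pls.predict(X_pred).ravel()

rmsep = float(np.sqrt(np.mean((y_hat - y_pred) ** 2)))
print(f"RMSEP on the held-out samples: {rmsep:.0f} mg/dL")
```

With real spectra, preprocessing steps such as baseline correction and normalization, together with cross-validated selection of the number of factors, would precede the final fit.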

Materials and methods

Results
Of the 193 dogs for which a complete signalment was available, 10 were intact females, 76 were spayed females, 24 were intact males, and 83 were neutered males. The mean age of the study population was 6.8 ± 3.9 years (Table 1). The study population encompassed 49 recognized pure breeds of small, medium and large size, as well as a large number of dogs of mixed breed (Table 1). Of the 193 dogs for which disease status was available, 46 had no clinical illness at the time of sample collection and 147 had disease at sample collection, including: 25 with orthopedic disease, 25 with neoplasia, 21 with non-specific gastrointestinal signs, 12 with suspected immune-mediated disease, 11 with renal disease, 10 with diagnosed endocrine disease, 6 with skin allergies, 5 with neurologic signs, 5 with elevated serum biochemistry values consistent with hepatic disease, and 5 with urinary tract infections. The remaining 22 ill animals included individual diseases affecting the integumentary (1), reproductive (2), cardiovascular (4), ophthalmic (1), and respiratory (6) systems, as well as urinary sphincter incompetence (2), vehicular trauma (2) and nonspecific clinical signs for which no specific diagnosis was obtained (4). The RID IgG concentrations for the 193 serum samples with available signalment data ranged from 432 mg/dL to 4979 mg/dL. The laboratory reference range for IgG in clinically healthy dogs was 522–4207 mg/dL, based on the non-parametric option in the Reference Value Advisor (Geffré et al., 2011).
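For context, a non-parametric reference interval of this kind is simply the central 95% of values from a healthy reference sample (the 2.5th and 97.5th percentiles). The sketch below shows that calculation on made-up IgG values; it is not a substitute for the Reference Value Advisor procedure, which also involves outlier checks and an adequately large reference sample.

```python
# Non-parametric reference interval: 2.5th-97.5th percentiles of healthy values.
import numpy as np

# Hypothetical IgG values (mg/dL) from clinically healthy dogs; a real
# non-parametric interval would typically use 120 or more reference individuals.
healthy_igg = np.array([600, 750, 900, 1100, 1300, 1600, 1900,
                        2300, 2700, 3100, 3500, 3900], dtype=float)

lower, upper = np.percentile(healthy_igg, [2.5, 97.5])
print(f"Reference interval: {lower:.0f}-{upper:.0f} mg/dL")
```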
For the FTIR-based assay, the 207 samples were sorted based on the IgG concentrations obtained by the RID method. Eight samples with RID IgG concentrations higher than 4000 mg/dL were excluded to reduce the leverage effects of this small number of samples, leaving spectra from 199 samples for building and validating the analytical method (Fig. 1). The optimal number of PLS factors was determined to be 17, based on the lowest Monte Carlo cross-validation error (RMMCCV = 529 mg/dL). A scatter plot (Fig. 2) shows the correlation between IgG concentrations obtained by RID and FTIR (RMSEC = 326 mg/dL, RMSEP = 404 mg/dL). The concordance correlation coefficients for the calibration model data set and the prediction set were 0.91 and 0.85, respectively.
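Agreement statistics of the type quoted above can be computed from paired reference and predicted values; the following sketch calculates the root mean squared error and Lin's concordance correlation coefficient for illustrative, made-up pairs of RID and FTIR IgG concentrations.

```python
# RMSE and Lin's concordance correlation coefficient for paired methods (assumed data).
import numpy as np

def rmse(reference, predicted):
    """Root mean squared error between reference and predicted values."""
    r, p = np.asarray(reference, float), np.asarray(predicted, float)
    return float(np.sqrt(np.mean((p - r) ** 2)))

def lin_ccc(reference, predicted):
    """Lin's concordance correlation coefficient (agreement with the 1:1 line)."""
    x, y = np.asarray(reference, float), np.asarray(predicted, float)
    cov = np.mean((x - x.mean()) * (y - y.mean()))
    return float(2 * cov / (x.var() + y.var() + (x.mean() - y.mean()) ** 2))

# Hypothetical paired measurements in mg/dL (not study data).
rid = np.array([800, 1200, 1500, 2100, 2600, 3100])
ftir = np.array([850, 1100, 1600, 2000, 2500, 3300])
print(f"RMSE = {rmse(rid, ftir):.0f} mg/dL, CCC = {lin_ccc(rid, ftir):.2f}")
```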
The Bland–Altman plot (Fig. 3) shows a mean difference (FTIR − RID) of −89 mg/dL. Relative to the range of IgG concentrations measured, this value is small and indicates no significant systematic bias between the two methods. A normal probability plot of the differences (Fig. 4) shows the majority of data points scattered closely around the reference line. The precision of the FTIR test and of the RID analysis is shown graphically in Figs. 5 and 6. The mean coefficient of variation (CV) was 6.67% for RID and 18.76% for FTIR.
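Similarly, the Bland–Altman bias and the coefficient of variation can be summarized in a few lines; the arrays below are placeholders used only to show the calculations.

```python
# Bland-Altman bias/limits of agreement and coefficient of variation (assumed data).
import numpy as np

def bland_altman(method_a, method_b):
    """Mean difference (bias) and 95% limits of agreement for a - b."""
    a, b = np.asarray(method_a, float), np.asarray(method_b, float)
    diff = a - b
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def coefficient_of_variation(replicates):
    """CV (%) of replicate measurements of the same sample."""
    r = np.asarray(replicates, float)
    return 100.0 * r.std(ddof=1) / r.mean()

ftir = np.array([900, 1450, 2050, 2480])   # hypothetical FTIR IgG values, mg/dL
rid = np.array([950, 1500, 2150, 2600])    # hypothetical RID IgG values, mg/dL
print("Bias and limits of agreement:", bland_altman(ftir, rid))
print(f"CV = {coefficient_of_variation([1480, 1530, 1510]):.1f}%")
```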

br Resources The ASVCP s

Resources
The ASVCP's QALS committee has published numerous guidelines (available online) to help with the development and implementation of quality systems. Additionally, continuing education opportunities focusing on quality assurance and quality control can be found in an intensive, two-part quality assurance course entitled ‘Quality Management for the Veterinary Clinical Pathology Laboratory’, offered through the Veterinary Information Network. Finally, the National Association of Veterinary Technicians in America has recently established a clinical pathology specialty organization for veterinary technicians, the Academy of Veterinary Clinical Pathology Technicians, whose members would likely be a considerable asset to a veterinary employer hoping to implement and maintain a quality system.

Introduction
Thrombosis causes a staggering number of human deaths; venous thromboembolism (VTE) alone causes over 500,000 deaths annually in Europe (Cohen et al., 2007). Estimates of canine thrombosis mortality rates are lacking, but thrombosis is the leading cause of death in immune-mediated hemolytic anemia (IMHA; Carr et al., 2002). Many other diseases are linked to increased thrombotic risk in dogs (e.g. protein-losing nephropathy or enteropathy, neoplasia, and pancreatitis; Palmer et al., 1998; Laurenson et al., 2010), and it is likely that rates of thrombosis are underestimated in such veterinary patients because of the difficulty of ante-mortem diagnosis (Goggs et al., 2014) and the rapid dissolution of thrombi after death (Moser et al., 1973).
Anticoagulant therapy, although it prevents thrombosis, carries a risk of serious hemorrhage. The exact rates of bleeding vary between human patient populations (Agnelli and Becattini, 2013) and anticoagulant regimens (Robertson et al., 2015), but severe hemorrhage is generally uncommon (0.2–2% of humans; Kearon, 2004; Carrier et al., 2010; EINSTEIN–PE Investigators et al., 2012; Gómez-Outes et al., 2015). Although anticoagulant-associated hemorrhage has been reported in dogs (Scott et al., 2009), the rates associated with unfractionated or low molecular weight heparins are reportedly low (Breuhl et al., 2009; Lynch et al., 2014). Despite its infrequency, the risk of severe bleeding cannot be ignored because fatality rates are high, with a death rate of 11.3% (95% confidence interval [CI]: 7.5–15.9%) among human patients who develop a major bleed while receiving anticoagulants (Carrier et al., 2010).
Ideally, anticoagulants would be administered only if the risk of thrombosis without treatment outweighed the risk of bleeding with treatment for an individual patient. In human medicine, there is considerable interest in laboratory testing to stratify patients according to thrombotic risk so as to target anticoagulants to high-risk individuals (Baglin, 2011; Prandoni et al., 2014). In veterinary medicine, the ability to predict which dogs are at high risk of thrombosis would offer similar benefits in the management of IMHA. Furthermore, it would ensure that the costs of anticoagulant therapy and its associated monitoring are incurred only when the treatment is likely to be beneficial, thus potentially reducing financially driven euthanasia in this often very expensive disease. Additionally, for conditions in which the rate of thrombosis is too low to justify routine anticoagulation, an inexpensive test capable of identifying dogs at high thrombotic risk could trigger treatment that protects them from the mortality and morbidity of thrombosis. Beyond individualized therapy, risk stratification would allow investigators to selectively recruit high-risk animals, potentially resulting in smaller, more efficient clinical trials (Mandrekar and Sargent, 2010). Although a large number of potential markers of future thrombosis have been identified in human studies, in the current review we focus primarily on those available as clinical veterinary tests before briefly discussing promising novel developments.