Tag Archives: Q-VD(OMe)-OPh

It is difficult to isolate each factor as

It is difficult to isolate each factor as an individual risk for graft failure; undoubtedly, the causality is multifactorial. In our series, graft detachment appeared to be the most significant contributor to graft failure (P<0.001). Accordingly, an effective way to further reduce the risk of graft failure is continued innovation in surgical techniques to minimize graft detachment. Glaucoma has been documented in the PKP literature as a significant risk factor for graft failure and, more recently, has been proposed as a risk factor for DSAEK as well. Overall, we found that IOP elevation by itself may not increase the risk (P>0.05); however, a history of glaucoma, particularly glaucoma drainage device (GDI) implantation, significantly increases predisposition to failure (P<0.001). Our findings corroborate data from the literature. Price et al. reported a 5-year survival rate of 40% in patients with prior glaucoma shunt or trabeculectomy surgery compared with 95% in those without, with a calculated hazard ratio of 2. This trend was also observed in PKP patients, in whom pseudophakia, aphakia, and prior glaucoma surgery substantially increased the risk of graft failure. Recently, Hollander et al. reported a 3-year failure rate of 59.1% in PKP patients with Ahmed valves. Our data suggest that preexisting glaucoma is associated with graft failure. The finding that medically treated glaucoma was associated with increased IOP but not graft failure, whereas surgically treated glaucoma was associated with graft failure but not increased IOP, may be consistent with the existing literature suggesting a possible post-GDI breach in the blood–aqueous barrier, or chronic trauma producing elevated levels of oxidative and inflammatory products in the aqueous humor that may precipitate corneal endothelial damage.
Given that DSAEK patients had a higher incidence of prior glaucoma shunt and trabeculectomy surgeries compared with PKP patients in a series by Price et al., further investigations are recommended to elucidate the mechanism, natural progression, and methods of improving outcomes, whether by further innovation in surgical techniques or by optimization of pre- and postoperative management. Caution is needed when interpreting these findings, as the many limitations inherent in a retrospective study apply here and may prevent generalization of our results. This series captured cumulative data over a mean follow-up period of 2 years, which is both an advantage and a disadvantage: it allows inclusion of a larger database, but the heterogeneous patient population may confound the analysis, and the evolution of surgical techniques among the four corneal surgeons may have influenced surgical outcomes. We acknowledge that IOP measurement using the Tonopen is an important limitation, as it has been reported to provide inadequate measurements for the management of glaucoma and ocular hypertension. In this series, Tonopen measurements were used only as a screening parameter and not for IOP management. Nevertheless, the results of this series add to a growing body of literature that aims to address long-term IOP management in DSAEK patients. The association between graft failure and history of glaucoma surgery, as well as the high incidence of postoperative IOP elevation in this series and others, should prompt the surgeon to focus on preoperative risk assessment and postoperative management to reduce the risks of graft failure and glaucoma.


Epiretinal membrane (ERM) is a glial proliferation on the surface of the retina along the internal limiting membrane that may cause visual distortion, blurred vision, or decreased visual acuity when present in the macular area. It can be idiopathic in origin, or a consequence of prior intraocular surgery, inflammation, retinal vasculopathy, or trauma.
Surgical removal of the epiretinal membrane by means of pars plana vitrectomy is usually performed in visually symptomatic patients, with good postoperative visual outcomes, but many surgeons use a minimum visual acuity of 20/50–20/60 as a cutoff for performing surgery. This threshold leaves no therapeutic option for those with symptomatic ERM and visual acuity of 20/40 or better. In addition, some symptomatic patients with advanced ERM may decline vitrectomy or be poor surgical candidates. ERM is a progressive condition; 10–37% of eyes with ERMs will demonstrate a decrease in visual acuity over three years.

Uncontrolled confounding may have some impact on the results

Uncontrolled confounding may have some impact on the results of the present study. The group of dentists and the controls differed with regard to sex distribution and the number of their own amalgam fillings. The dentists also smoked less, but drank slightly more alcohol than the controls. With regard to sex, our stratified analysis in Table 2 did not reveal any differences in cognitive symptoms between the groups irrespective of sex. Apart from information on the occurrence of arterial hypertension, we lacked data on other potential confounders such as drug abuse, mental illness, other cardiovascular diseases, and previous head injuries, but had little reason to assume that these were unevenly distributed between dentists and controls.
Only one of the previous studies on the occurrence of symptoms in dental personnel had made use of the same standardised Euroquest questionnaire [6] that we used in the present study. It is therefore difficult to relate our results to the results of others, except to the recently published Norwegian study and our own study of dental assistants that also made use of Euroquest and reported higher symptom scores in dental assistants than in our dentists [6,7]. A shortcoming in the use of Euroquest is that there are no reliable reference values available yet, and thus, we do not know the occurrence of symptoms in a larger “normal” population. Table 4 gives a short summary of results from earlier investigations where Euroquest scores were reported in a way that is comparable with the results of the present study. This again raises some questions as to whether these results are comparable. In our previous study of dental assistants [7] the controls had higher symptom scores than the controls in the present study. This is probably because the controls used in the previous study constituted a representative sample of the general population while only subjects with a university degree were used as controls for the dentists. As for the French study that reported Euroquest symptoms in a comparable way [23], female controls from the general population had Euroquest scores that were fairly in line with the scores in our female dentists and controls, but here, the possibility of cultural differences should also be taken into account.



Korea, which until recently had a homogeneous society with a single ethnic group and a single language, has been rapidly changing into a multicultural society since 2000. During the last decade, migrant workers have been filling vacancies in certain sectors, as native Koreans have tended to avoid physically demanding jobs such as those in the construction, agriculture, and manufacturing industries. According to the 2008 census report by the Ministry of Public Administration and Security in Korea [1], there were 440,000 migrant workers, or 0.9% of the total population of Korea; about 30% of them were women. Southeast Asian female workers were the second largest group after Chinese workers, and about 70% of them came from Thailand, Vietnam, and the Philippines.
The rapid increase in the migrant worker population, particularly from Southeast Asian countries, has brought new challenges in terms of the possible effects of acculturation on health. Acculturation is defined as a process of cultural and psychological change that results from continual contact between people of different cultural backgrounds [2]. The process of acculturation is described along two dimensions: the individual's relative attitude and behavior toward maintaining his or her own heritage culture and identity, and the individual's relative attitude and behavior toward becoming involved in the other culture and identity. Berry proposed four strategies to describe differences in an individual's acculturation process: assimilation (not maintaining one's cultural identity while seeking interaction with other cultures), integration (both maintaining one's original culture and interacting with other groups), separation (maintaining one's original culture while avoiding interaction with other ethno-cultural groups), and marginalization (little interest in either cultural maintenance or relations with others). These strategies do not refer to a "level" or "degree" of acculturation; rather, individuals have varying degrees of preference for each of the four categories [3].
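The two-dimension-to-four-strategy mapping described above can be sketched in code. This is an illustrative toy only (the function name and the reduction of each attitude to a boolean are ours, not Berry's); actual acculturation instruments score graded preferences on each dimension rather than yes/no attitudes.

```python
# Illustrative sketch of Berry's two-dimensional acculturation framework.
# The two attitudes are reduced to booleans purely for clarity; real
# acculturation scales measure graded preferences for all four strategies.

def berry_strategy(maintains_heritage: bool, seeks_contact: bool) -> str:
    """Map the two attitude dimensions onto Berry's four strategies."""
    if maintains_heritage and seeks_contact:
        return "integration"
    if seeks_contact:
        return "assimilation"
    if maintains_heritage:
        return "separation"
    return "marginalization"

print(berry_strategy(True, False))  # separation
```

Because the framework is a full cross of two dimensions, every combination of attitudes maps to exactly one strategy label.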

Adolescent neural development of emotional attention

Adolescent neural development of emotional attention has so far been investigated by only a handful of cross-sectional studies. For negative distractors, two studies suggest ongoing development of prefrontal regions: Monk et al. (2003) reported lower ACC activation in adults than in adolescents when evaluating the nose width of emotional faces, and Deeley et al. (2008) found decreasing prefrontal activation from childhood through adolescence to adulthood when evaluating the gender of emotional faces. In contrast, Lindstrom et al. (2009) found development not for negative but for positive distractors: cuneus and caudate activation decreased with increasing age from 9 to 40 years in a dot-probe task. Positive distractors also led to stronger orbitofrontal cortex activation in adolescents compared with adults (Monk et al., 2003).
However, longitudinal approaches are essential to investigate developmental trajectories precisely. Among the few such studies, one found increasing activity from age 10 to 13 in the temporal pole when emotional versus neutral faces were attended without constraint (i.e., passive viewing; Pfeifer et al., 2011).
Paulsen et al. (2015) studied how incentives, age, and performance modulate activity during inhibitory control in a longitudinal design including 10- to 22-year-olds. In an incentivized antisaccade task, amygdala-mediated bottom-up processing elicited by loss trials decreased through adolescence. In contrast, activity of prefrontal control regions – which was also associated with better behavioral performance – increased linearly with age.
Change in regions involved in social cognition (including the inferior frontal gyrus, IFG, and superior temporal sulcus) was investigated in 12- to 19-year-olds with a two-year interval using pictures of eye regions (Overgaauw et al., 2014). Activity elicited by evaluating the mental state versus evaluating age/gender was stable in the superior temporal sulcus. While the right IFG showed relative stability, age comparisons revealed a decrease in activation. The medial prefrontal cortex showed a dip of activation in mid-adolescence. Taken together, previous longitudinal studies suggest subtle developmental changes in activation patterns of brain regions crucial for emotion regulation.
We expected greater neural development for the ignoring-emotion condition than for the attending-emotion condition, given previous studies on mid-adolescent behavioral development. Based on previous neuroimaging studies (Monk et al., 2003; Deeley et al., 2008; Paulsen et al., 2015), we expected this development in prefrontal brain regions. Because a systematic contrast against a baseline was missing in most previous studies, we contrasted negative and positive with neutral pictures. Regarding emotional valence, negative versus neutral distractors, in contrast to positive versus neutral distractors, might elicit stronger distraction (Öhman et al., 2001) and might require stronger top-down regulation by the prefrontal cortex (PFC) when ignored. Therefore, and since PFC increases have generally been found at both the functional and structural level throughout adolescence (Crone and Dahl, 2012; Gogtay et al., 2004), we expected an increase of prefrontal activity for negative in contrast to positive distractors from age 14 to 16. Additionally, amygdala activation was tested, given previous findings of elevated amygdala activation toward negative stimuli in adolescents compared with children and adults in a go/no-go paradigm (Hare et al., 2008).



Overall, patterns of brain activity elicited by the task were consistent with previous studies (Dolcos and McCarthy, 2006; for a review see also Iordan et al., 2013, Vuilleumier and Huang, 2009). Developmental effects between ages 14 and 16 were most evident in regions related to cognitive top-down control, while activity in regions mediating bottom-up processing was stable. However, by employing a neutral baseline and always contrasting emotional versus neutral conditions, the current results extend previous findings by demonstrating emotion-specific development. The main novel developmental finding is twofold and partly confirms the a priori hypothesis. First, there was a main effect of age in the bilateral IFG and the ACC, indicating overall increasing activation for emotional attention. As expected, this increase in PFC activation was found for ignoring negative emotions; however, it was additionally found for ignoring positive emotions and for attending negative and positive emotions. Second, rather unexpectedly, there was a three-way interaction in the left anterior insula, indicating increasing activation elicited by ignoring negative emotions and attending positive emotions, but stable neural activity for ignoring positive and attending negative emotions. As expected, we did not find amygdala development. A PPI analysis of both amygdalae revealed a three-way interaction in the posterior cingulate cortex and the precuneus, with an increase in connectivity to the right amygdala from age 14 to 16 for attending negative and ignoring positive, but not for attending positive and ignoring negative, emotions.
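As a rough illustration of the connectivity analysis mentioned above, the following sketch shows how a psychophysiological interaction (PPI) regressor is conventionally constructed: the element-wise product of the demeaned seed (amygdala) time course and the centered task regressor, entered into a regression for a target region. All data here are synthetic and the variable names are ours; this is not the authors' pipeline.

```python
import numpy as np

# Synthetic illustration of a PPI regressor (not the authors' analysis).
rng = np.random.default_rng(0)
n_scans = 200
seed_ts = rng.standard_normal(n_scans)              # seed (amygdala) BOLD time course
task = np.tile([1.0] * 10 + [-1.0] * 10, n_scans // 20)  # attend vs. ignore blocks, centered

# PPI regressor: demeaned seed signal multiplied by the task regressor.
ppi = (seed_ts - seed_ts.mean()) * task

# Design matrix: task, seed, PPI, intercept. The PPI beta for a target
# region (e.g., precuneus) indexes a task-dependent change in connectivity.
X = np.column_stack([task, seed_ts, ppi, np.ones(n_scans)])
target_ts = rng.standard_normal(n_scans)             # target-region time course
betas, *_ = np.linalg.lstsq(X, target_ts, rcond=None)
print(betas.shape)  # one beta per column: (4,)
```

In a real analysis the seed time course would be deconvolved and the regressors convolved with a hemodynamic response function before fitting; the sketch omits these steps for brevity.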

Materials and Methods

Materials and Methods


Several confirmed human carcinogenic agents for occupational LHP cancer have been identified by IARC (Group 1), including benzene, ionizing radiation, ethylene oxide, 1,3-butadiene, and formaldehyde [12]. Among the chemicals used as raw materials and additives in the company where the seven cases occurred, none were IARC Group 1 human carcinogens for LHP cancers (Table 2). MSDSs for ethylene oxide and formaldehyde were found at some other major semiconductor companies, although their use has been very limited; none were found at Plants A and B (Table 5).
Various chemicals, including carcinogens, are suspected to have been used in the semiconductor industry [2,13]. However, the details of these chemicals and their carcinogenic risks have remained hidden behind trade secrets in this high-technology industry. In 2010, HSE found seven kinds of IARC Group 1 and 2 carcinogens (antimony trioxide, arsenic compounds, carbon tetrachloride, ceramic fiber, chromium, sulfuric acid, and trichloroethylene) [5], but none of them were related to LHP cancer. In 2007, the National Institute for Occupational Safety and Health in the US (NIOSH) reported the list of chemicals used in an IBM plant (Endicott) [14]. The report listed 20 known human carcinogens (IARC Group 1) among 198 chemicals used from the late 1970s to 2004. Of these 20 carcinogens, benzene and formaldehyde were known human carcinogens with strong associations with LHP cancer. However, neither HSE nor NIOSH measured air levels at the companies at that time.
To the best of our knowledge, air measurements of carcinogenic agents related to LHP cancer in the semiconductor industry have not been previously reported. From 2007 to 2010, OSHRI measured air benzene levels at the workplaces of the seven cases, but benzene was not detected in any sample [9]. The maximum benzene level in another study, conducted in 2009 [10], was 0.31 ppb (minimum detection level: 0.1 ppb), which did not differ from the outside air concentration (less than 0.30 ppb) measured at the same time. Because most major chemicals used in the FAB department pass through closed vessels in the automated system, exposure levels of most chemicals are low during normal procedures. The chemical levels from the annual work environment measurements required by law (Table 4) were also low. Low-concentration chemical exposure has also been reported in other studies. The most frequently studied chemicals in the semiconductor industry were organic solvents [15,16], halides [16], highly toxic gases such as arsine [17], and rare-earth metals [18,19] such as gallium, indium, and arsenic (Table 6). The concentrations of gases, metals, glycols, and other organic solvents in FAB industries [16-19] were low.
For benzene, the risk of LHP cancer has been reported to double at cumulative doses of 40 ppm-years [20]. Compared with this level, the exposure duration of the seven cases was not sufficient to develop LHP cancer, especially at such low exposure levels. One study reported that even under low levels of exposure, exposure durations longer than 20 years [21], or very high peak exposures (above 100 ppm) [22], might increase the risk of LHP cancers. However, the exposure periods of all cases in this study were less than 20 years (six of them < 10 years) (Table 1), and the peak exposure level was far below 100 ppm. There was no known benzene among the major raw materials and ingredients in the semiconductor industry. We can assume that benzene could be generated by chemical interactions of other chemicals; however, this would result in very low levels. Therefore, accidental high-level exposure to benzene is very unlikely. In addition, two cases (Cases No. 1 and 7) had exposure durations of less than two years, and in two other cases (Cases No. 5 and 6), LHP cancer developed 9 to 10 years after leaving the company.
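The cumulative-dose reasoning above is simple arithmetic and can be checked directly. The 40 ppm-years doubling threshold and the 0.31 ppb maximum measured level come from the text; the 20-year exposure duration used below is a deliberately generous assumption, since all cases were exposed for less than 20 years.

```python
# Back-of-the-envelope check of cumulative benzene dose (ppm-years).
# Threshold and maximum measured level are from the text; the 20-year
# duration is an intentionally generous assumption for illustration.

DOUBLING_DOSE_PPM_YEARS = 40.0   # reported threshold for doubled LHP cancer risk [20]

def cumulative_dose(level_ppm: float, years: float) -> float:
    """Cumulative exposure as airborne level times exposure duration."""
    return level_ppm * years

max_level_ppm = 0.31 / 1000      # maximum measured level: 0.31 ppb = 0.00031 ppm
dose = cumulative_dose(max_level_ppm, 20)

print(round(dose, 4))                    # 0.0062 ppm-years
print(dose < DOUBLING_DOSE_PPM_YEARS)   # True
```

Even under this generous assumption, the cumulative dose is roughly four orders of magnitude below the reported doubling threshold.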

Several factors prompted the need

Several factors prompted the need for a pharmacoeconomic evaluation of IC and MEM. These included an institutional review of antimicrobial restriction and concerns about usage and costs. Most importantly, interchanging MEM with IC was thought to be able to lead to a cost saving of more than two million Saudi Riyals, as the acquisition costs of IC were lower than those of MEM (SAR 70.40 versus SAR 151.26 per vial). In addition, published pharmacoeconomic evaluations are limited in Saudi Arabia (Al Aqeel and Al-Sultan, 2012). To our knowledge, no published pharmacoeconomic evaluations comparing IC and MEM in adult patients have been conducted in Saudi Arabia. Several international pharmacoeconomic evaluations have been done (Attanasio et al., 2000; Edwards et al., 2006; Badia et al., 1999), but with conflicting results. Using data based on the local perspective therefore had the potential to provide insight into the factors influencing local practice and medicines selection. Government institutions in Saudi Arabia, which provide free medical treatment, may adopt similar costing strategies unique to this region.
At a dose of 500 mg q6h (SAR 281.60 per day), IC is an attractive alternative to MEM 1 g q8h (SAR 453.78 per day), particularly in mild to moderate infections.
The overall ADEs were not significantly different between the groups, although ADEs were found to be under-reported. More patients had gastrointestinal ADEs in the MEM group, but this was not significantly different compared with IC. These were mainly antibiotic-associated diarrhoea, prompting C. difficile cultures. One patient on IC experienced a seizure. Concern about this adverse effect has prompted avoidance of IC among health care workers in our hospital. It must be pointed out, however, that Hoffman et al. (2009) found no difference in seizure rates between patients treated with IC and MEM. These authors noted that elderly patients, patients with low body weight, those at risk of CNS disease, those with a history of seizures, and those with renal dysfunction appear to be at increased risk of drug-related seizures. On this basis, the patient in our cohort who developed seizures was at increased risk. This study excluded patients with bacterial meningitis, as this population is at risk for seizures. In addition, our hospital guidelines (MNGHA, 2012) do not advocate the use of IC in those at increased risk of seizures or in patients with poor renal function. Our study, in agreement with Hoffman et al. (2009), did not show significant differences in ADEs associated with IC or MEM.
Total hospital days, and especially total CCU days, were significantly higher in the IC group. The longer CCU stays were believed to influence costs, especially in the IC group. Patients varied significantly in the number of CCU days. Independent-sample t-tests showed no significant difference in mean daily hospital costs or step-down costs. However, GW costs were significantly lower in the IC group than in the MEM group (p=0.016). Although total CCU costs were higher, cost per day was not statistically different between the two groups, except for the GW days. More patients in the MEM group spent a greater number of days in the GW unit, which drove up mean costs in this group.
The mean total daily cost of vials in the IC group was much lower than in the MEM group (SAR 250.63 vs. 393.48). This was expected, as the cost of IC, given 4 times daily, results in a daily cost of SAR 281.60, versus SAR 453.78 for MEM, given 3 times daily. The mean costs in our study also reflect dose changes. A previous unpublished study at the institution showed that this difference in acquisition costs could result in savings of more than SAR 2 million per year if IC were used instead of MEM. This makes IC an attractive carbapenem choice in patients with moderate to severe infections. Despite significant differences in acquisition costs, laboratory culture costs, and pharmacist and pharmacy aide costs, the total average cost per day was not significantly different between the two groups (SAR 4784.46 for IC and SAR 4390.13 for MEM, p=0.370).
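The acquisition-cost arithmetic reported above can be reproduced directly. Vial prices and dosing frequencies are taken from the text; the script is only a bookkeeping sketch, and any annual projection would additionally depend on patient volume.

```python
# Acquisition-cost arithmetic from the text (SAR per vial, doses per day).
IC_VIAL, IC_DOSES_PER_DAY = 70.40, 4      # imipenem/cilastatin 500 mg q6h
MEM_VIAL, MEM_DOSES_PER_DAY = 151.26, 3   # meropenem 1 g q8h

ic_daily = IC_VIAL * IC_DOSES_PER_DAY     # daily acquisition cost of IC
mem_daily = MEM_VIAL * MEM_DOSES_PER_DAY  # daily acquisition cost of MEM
saving_per_day = mem_daily - ic_daily     # per-patient daily saving with IC

print(round(ic_daily, 2))        # 281.6
print(round(mem_daily, 2))       # 453.78
print(round(saving_per_day, 2))  # 172.18
```

The per-day figures match those quoted in the text (SAR 281.60 for IC and SAR 453.78 for MEM), leaving a saving of SAR 172.18 per patient per treatment day on acquisition cost alone.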

Finally in the same four patients

Finally, in the same four patients, the IVC diameter was simultaneously estimated in three different sections by means of three corresponding moving M-lines controlled by the same algorithm. The results are illustrated in Figure 9. Note that the time course of IVC diameter varies in different sections in the same subject, indicating that different portions of the IVC exhibit different pulsatility. As a consequence, the CIs estimated in the three positions are characterized by great variability, although with some difference among the four patients.

The assessment of IVC dimensions on the basis of US scans is potentially affected by movements of the vein, which occur mainly in the craniocaudal direction, as a result of respiratory activity (Blehar et al. 2012). To avoid respiration-related movements, in a recent study the recordings were performed on patients maintaining a short apnea, thus limiting analysis of oscillations to the cardiac component (Nakamura et al. 2013). Another possibility is to manually make single measurements at the same location using B-mode cine loops rather than M-mode (Lyon et al. 2005; Wallace et al. 2010). However, this approach, based on discrete measurements, cannot provide continuous monitoring of IVC diameter.
In this study, an image processing algorithm is described that operates on longitudinal scans of the IVC (B-mode clip) and, based on reference points arbitrarily selected by the user, is able to follow IVC displacement in time. In this way, diameter changes of a given IVC section can be monitored irrespective of IVC longitudinal movements, and the analysis can be oriented to both cardiac- and respiratory-related oscillations. The algorithm was validated in a series of simulations in which translation, rotation, distortion of the IVC and additive noise were progressively implemented. The results indicated the following. (i) Simulated displacement of the IVC did generate movement artifacts in diameter estimation performed along a fixed M-line (as provided by most commercial devices operating in M-mode). (ii) The algorithm is able to eliminate most of the movement artifacts by performing the measurement along the moving M-line, which follows the IVC displacements. (iii) Relatively small errors, expressed as the average RMS of the measured minus simulated diameter, may nevertheless result in large changes in the estimated CI. (iv) The movement-related errors depend not only on the extent of the movement, but also on the individual longitudinal IVC profile and on the chosen M-line (compare subject 2 in Figs. 7 and 8).
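For readers unfamiliar with the caval index (CI), the following sketch computes it from a continuously monitored diameter signal, using the conventional definition CI = (Dmax - Dmin) / Dmax. The synthetic signal (a baseline plus respiratory and cardiac sinusoids) and all parameter values are invented for illustration and are not the authors' data or code.

```python
import numpy as np

# Synthetic IVC diameter signal: baseline plus respiratory and cardiac
# oscillations. All amplitudes and frequencies are illustrative only.
t = np.linspace(0, 10, 1000)                  # 10 s of monitoring
resp = 1.5 * np.sin(2 * np.pi * 0.25 * t)     # respiratory component (mm, ~15/min)
card = 0.3 * np.sin(2 * np.pi * 1.2 * t)      # cardiac component (mm, ~72/min)
diameter = 18.0 + resp + card                  # tracked IVC diameter (mm)

# Caval index over the monitored window.
d_max, d_min = diameter.max(), diameter.min()
ci = (d_max - d_min) / d_max
print(round(ci, 3))
```

Because CI divides a small difference by the maximum diameter, even small tracking errors in Dmax or Dmin propagate strongly into CI, which is why result (iii) above notes that modest RMS diameter errors can produce large CI changes.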


The increasing life span in industrial countries leads to an aging population, and thus disorders of old age become more relevant. Osteoporosis is the most prevalent metabolic bone disease and one of the most frequent diseases in the elderly (Jones et al. 1994; Warming et al. 2002). It is characterized by bone loss and, hence, an increased risk of fractures, which can cause immobilization (Hall et al. 1999) and increased mortality (Center et al. 1999). Because a large proportion of the population (about 39% of women older than 50 in Germany [Häussler et al. 2007]) have osteoporosis, high-quality diagnosis, prognosis and therapy monitoring are essential. Today, the clinical gold standard for these tasks is the assessment of areal bone mineral density (aBMD) of the hip and spine by dual X-ray absorptiometry (DXA).
Dual X-ray absorptiometry of the spine or hip and quantitative ultrasound (QUS) measurements at the heel permit estimation of osteoporotic fracture risk with comparable performance (Bauer et al. 2007; Hans et al. 1996; Pinheiro et al. 2006). Therapy monitoring is also relevant, but here the performance of QUS methods remains controversial (Blake and Fogelman 2007; Krieg et al. 2008). DXA monitoring of the spine (Faulkner 1998) is more sensitive than that of the hip, as the vertebrae consist mainly of cancellous bone, which is more responsive to changes in bone metabolism because of its larger surface compared with cortical bone. However, degenerative changes of the vertebrae—that is, calcifications, especially those on or near the outer surface of the cortex of the vertebrae—are a major source of error, which limits the potential to use spinal DXA for monitoring (Guglielmi et al. 2005). In addition, drugs such as bisphosphonates minimize fracture risk even if no or only minimal changes in aBMD are measured (Chapurlat et al. 2005; Watts et al. 2004). In contrast to DXA, ultrasound, as a mechanical wave, is influenced by other aspects and is related to bone microstructure and stiffness (Goossens et al. 2008; Hodgskinson et al. 1997; Nicholson et al. 1998). Nevertheless, it is unclear whether ultrasound measurements can yield more information about the development of fracture risk during therapy. Some longitudinal studies of different therapies did not prove the feasibility of using QUS for monitoring (Frost et al. 2001; Gonnelli et al. 2002, 2006; Sahota et al. 2000). One contributing factor might be the poor (long-term) precision of current QUS devices, as criticized by the International Society for Clinical Densitometry (ISCD) (Krieg et al. 2008). Furthermore, adequate studies reporting the ability of QUS to monitor therapy are lacking, although the ISCD recognized some evidence of this potential.
QUS at the heel seems to be better suited for this task than QUS assessments at other sites (e.g., radius and phalanges) because of a stronger response to anti-resorptive treatment of the QUS parameters at this site (Krieg et al. 2008). The common QUS parameters measured by commercial devices at the heel include the apparent speed of sound (SOS) and broadband ultrasound attenuation (BUA). Gonnelli et al. (2002) reported a five times higher monitoring time interval for BUA compared with SOS in monitoring bisphosphonate treatment. Therefore, in this study we focused on SOS as the more sensitive parameter of changes in bone status. The sensitivity to detect changes is related to responsiveness (i.e., changes in the reading of the given method over a specific interval) and the long-term precision error (Glüer 1999). Thus, increases in sensitivity may be obtained by decreasing errors in precision. Common error sources in QUS include repositioning of the foot along with the placement of the region of interest (ROI) and the temperature of the coupling medium as well as of the soft tissue (Njeh et al. 1999). The device developed in our lab (foot ultrasound scanner [FUS]) was constructed using innovative design features with the aim of minimizing the impact of these issues by using an ultrasound array to generate an image, mechanics for adjusting the ultrasound incident beam angle, temperature stabilization of the coupling medium and a sensor to measure the temperature of the foot. The FUS was built and tested to assess whether achievement of substantial improvements in SOS mid-term precision compared with commercially available devices is feasible and to estimate the impact of the design features introduced on the precision of calcaneal QUS.
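The link between precision error and monitoring sensitivity discussed above can be made concrete. Following the standard densitometry convention (Glüer 1999), the least significant change (LSC, at 95% confidence) is 2.77 times the long-term precision error, and the monitoring time interval is the LSC divided by the expected annual response. The numeric values below are hypothetical placeholders, not FUS or commercial-device results.

```python
# Sketch relating precision error to monitoring sensitivity (after Glüer 1999).
# All numeric inputs are hypothetical, chosen only to illustrate the scaling.

def least_significant_change(precision_cv_pct: float) -> float:
    """Smallest change (in %) detectable at 95% confidence for a given
    long-term precision error (coefficient of variation, %)."""
    return 2.77 * precision_cv_pct

def monitoring_time_interval(precision_cv_pct: float,
                             annual_response_pct: float) -> float:
    """Years needed for the expected annual treatment response (%/year)
    to exceed the least significant change."""
    return least_significant_change(precision_cv_pct) / annual_response_pct

# Halving the precision error halves the monitoring time interval
# for the same responsiveness:
print(round(monitoring_time_interval(1.0, 2.0), 4))  # 1.385 years
print(round(monitoring_time_interval(0.5, 2.0), 4))  # 0.6925 years
```

This scaling is why the design features listed above (imaging-based ROI placement, beam-angle adjustment, temperature stabilization) target precision: any reduction in precision error translates directly into a shorter interval over which a true change in bone status can be detected.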