FER (Phospho-Tyr402) ELISA Kit (DEIA-XYA651)

Regulatory status: For research use only, not for use in diagnostic procedures.


Size: 2 x 96T
Sample Type: cultured cells
Species Reactivity: Human, Mouse
Intended Use
The FER (Phospho-Tyr402) Cell-Based ELISA Kit is a convenient, lysate-free, high-throughput, and sensitive assay for monitoring FER phosphorylation and expression in cells. The kit measures the relative amount of phosphorylated FER in cultured cells and can be used to screen the effects of various treatments, inhibitors (e.g., siRNA or chemical inhibitors), or activators on FER phosphorylation.
Contents of Kit
1. 96-Well Cell Culture Clear-Bottom Microplate: 2 plates
2. 10x TBS: 24 mL
3. Quenching Buffer: 24 mL
4. Blocking Buffer: 50 mL
5. 10x Wash Buffer: 50 mL
6. 100x Anti-FER (Phospho-Tyr402) Antibody (Rabbit Polyclonal): 60 μL, red
7. 100x Anti-FER Antibody (Rabbit Polyclonal): 60 μL, purple
8. 100x Anti-GAPDH Antibody (Mouse Monoclonal): 60 μL, green
9. HRP-Conjugated Anti-Rabbit IgG Antibody: 12 mL, glass
10. HRP-Conjugated Anti-Mouse IgG Antibody: 12 mL, glass
11. Primary Antibody Diluent: 12 mL
12. Ready-to-Use Substrate: 12 mL
13. Stop Solution: 12 mL
14. Crystal Violet Solution: 12 mL
15. SDS Solution: 24 mL
16. Adhesive Plate Seals: 4 seals
Storage: 4°C, stable for 6 months
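In a cell-based ELISA of this design, the relative phospho-FER level is typically computed as the phospho-antibody signal over the total-FER signal, with each well's OD first normalized to its crystal violet (cell number) reading. The sketch below illustrates that calculation only; the function names and the exact normalization scheme are assumptions, not taken from the kit manual:

```python
def relative_phosphorylation(od_phospho, od_total, cv_phospho_well, cv_total_well):
    """Hypothetical readout calculation for a cell-based phospho ELISA.

    od_phospho / od_total: HRP substrate absorbance for the phospho- and
    total-protein antibody wells. cv_*: crystal violet absorbance of the
    same wells, used to correct for differences in cell number.
    """
    norm_phospho = od_phospho / cv_phospho_well   # phospho signal per cell mass
    norm_total = od_total / cv_total_well         # total-protein signal per cell mass
    return norm_phospho / norm_total              # relative phosphorylation level


# Example: fold change in phosphorylation, treated vs. untreated wells
control = relative_phosphorylation(0.8, 1.6, 0.5, 0.5)
treated = relative_phosphorylation(1.2, 1.6, 0.4, 0.4)
fold_change = treated / control
```

GAPDH wells can be used the same way as an independent loading control in place of, or alongside, crystal violet normalization.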





Facial expression recognition boosted by soft label with a diverse ensemble


Authors: Gan, Yanling; Chen, Jingying; Xu, Luhui

Facial expression recognition (FER) has recently attracted increasing attention with its growing applications in human-computer interaction and other fields. But a well-performing convolutional neural network (CNN) model learned using hard label/single-emotion label supervision may not obtain optimal performance in real-life applications because captured facial images usually exhibit expression as a mixture of multiple emotions instead of a single emotion. To address this problem, this paper presents a novel FER framework using a CNN and soft label that associates multiple emotions with each expression. In this framework, the soft label is obtained using a proposed constructor, which mainly involves two steps: (1) training a CNN model on a training set using hard label supervision; (2) fusing the latent label probability distribution predicted by the trained model to obtain soft labels. To improve the generalization performance of the ensemble classifier, we propose a novel label-level perturbation strategy to train multiple base classifiers with diversity. Experiments have been carried out on 3 publicly available databases: FER-2013, SFEW and RAF. The results indicate that our method achieves competitive or even better performance (FER-2013: 73.73%, SFEW: 55.73%, RAF: 86.31%) compared to state-of-the-art methods. (C) 2019 Published by Elsevier B.V.
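The two-step soft-label construction described above (train with hard labels, then fuse the trained model's predicted distributions into soft labels) can be sketched as follows. The fusion rule shown, a weighted average of the one-hot hard label and the predicted distribution, is an assumption for illustration; the abstract does not specify the exact fusion formula:

```python
import numpy as np

def make_soft_labels(hard_labels, pred_probs, alpha=0.5, n_classes=7):
    """Fuse one-hot hard labels with a trained model's predicted
    emotion distributions to obtain soft labels.

    hard_labels: (N,) integer class indices.
    pred_probs:  (N, n_classes) per-sample probability distributions
                 predicted by the hard-label-trained CNN.
    alpha:       weight on the hard label (assumed hyperparameter).
    """
    one_hot = np.eye(n_classes)[hard_labels]          # (N, n_classes)
    return alpha * one_hot + (1.0 - alpha) * pred_probs


# Example: two samples over 7 emotion classes
hard = np.array([0, 3])
probs = np.full((2, 7), 1.0 / 7)                      # uniform predictions
soft = make_soft_labels(hard, probs)                  # each row still sums to 1
```

Because both inputs are valid distributions, each fused row remains a valid distribution, so it can be used directly as a target for cross-entropy training of the ensemble's base classifiers.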

Multiple Attention Network for Facial Expression Recognition


Authors: Gan, Yanling; Chen, Jingying; Yang, Zongkai; Xu, Luhui

One key challenge in facial expression recognition (FER) is the extraction of discriminative features from critical facial regions. Because of their promising ability to learn discriminative features, visual attention mechanisms are increasingly used to address pattern recognition problems. This paper presents a novel multiple attention network that simulates humans' coarse-to-fine visual attention to improve expression recognition performance. In the proposed network, a region-aware sub-net (RASnet) learns binary masks for locating expression-related critical regions with coarse-to-fine granularity levels and an expression recognition sub-net (ERSnet) with a multiple attention (MA) block learns comprehensive discriminative features. Embedded in the convolutional layers, the MA block fuses diversified attention using the learned masks from the RASnet. The MA block contains a hybrid attention branch with a series of sub-branches, where each sub-branch provides region-specific attention. To explore the complementary benefits of diversified attention, the MA block also has a weight learning branch that adaptively learns the contributions of the different critical regions. Experiments have been carried out on two publicly available databases, RAF and CK+, and the reported accuracies are 85.69% and 96.28%, respectively. The results indicate that our method achieves competitive or better performance than state-of-the-art methods.
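The fusion step in the MA block, combining region-masked features according to learned region weights, can be sketched as below. Treating the fusion as a softmax-weighted sum of masked feature maps is an assumption based on the abstract; the paper's actual block structure may differ:

```python
import numpy as np

def fuse_region_attention(features, masks, region_logits):
    """Illustrative fusion of region-specific attention.

    features:      (C, H, W) feature maps from a convolutional layer.
    masks:         (R, H, W) binary region masks, as learned by the RASnet.
    region_logits: (R,) learned region contributions (the weight learning
                   branch), combined here via softmax -- an assumption.
    """
    w = np.exp(region_logits) / np.exp(region_logits).sum()  # softmax over regions
    # Weighted sum of region-masked feature maps
    return sum(w[r] * features * masks[r] for r in range(len(masks)))


# Example: one full-image region reduces to an identity fusion
feat = np.random.rand(2, 4, 4)
mask = np.ones((1, 4, 4))
fused = fuse_region_attention(feat, mask, np.zeros(1))
```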




Ordering Information

Payment methods we support:
Invoice / Purchase Order
Credit card
