Infosys Performance Appraisal Deep Dive

The Infosys performance appraisal is the single most consequential recurring event in an Infosys employee’s career. It determines salary increments, variable pay, promotion eligibility, project assignments, and ultimately whether a career accelerates, plateaus, or declines. Yet most employees navigate it reactively: filling out the self-assessment form in two hours the night before it is due, accepting whatever rating the manager communicates, and spending the rest of the year vaguely hoping for better results next time.

This guide is for the employee who wants to navigate the process deliberately. It covers every stage of the Infosys appraisal cycle in depth: how the iRace performance management system works, how to set goals that produce strong year-end evidence, how to write a self-assessment that actually moves your rating, what happens inside the calibration process, how forced distribution affects every employee regardless of individual performance, what managers actually evaluate, how to handle the appraisal conversation productively, when the appeal process makes sense, and how the rating translates into promotion, increment, and career outcomes.

The appraisal system is not designed to be opaque; it is designed to assess performance against organizational goals. Employees who understand that design and engage with it deliberately fare consistently better than those who treat it as an administrative obligation.


Table of Contents

  1. The Infosys Appraisal System: Overview and Philosophy
  2. The iRace Platform: What It Is and How It Works
  3. Goal Setting: The Foundation of the Entire Appraisal
  4. The Mid-Year Review: More Than a Check-In
  5. The Year-End Self-Assessment: How to Write It Well
  6. The Manager’s Assessment: What They Actually Evaluate
  7. The Calibration Process: What Happens in the Room
  8. Forced Distribution: Understanding the Bell Curve
  9. The Appraisal Discussion: Having the Conversation Well
  10. 360-Degree Feedback: How It Works and Why It Matters
  11. The Appeal Process: When and How to Use It
  12. How Ratings Connect to Promotion, Increment, and Career
  13. Performance Improvement Plans: What They Are and How to Exit One
  14. Common Appraisal Mistakes and How to Avoid Them
  15. Frequently Asked Questions

The Infosys Appraisal System: Overview and Philosophy

The Purpose of the Appraisal:

The Infosys performance appraisal serves three distinct organizational functions simultaneously. First, it determines the distribution of compensation increases: higher-rated employees receive higher increments and variable pay, which is the mechanism by which Infosys rewards strong performance financially. Second, it identifies employees for promotion consideration: the promotion pipeline is fed by employees who have demonstrated sustained strong performance across multiple appraisal cycles. Third, it provides a structured feedback mechanism: the appraisal discussion communicates where the employee stands, what is being done well, and what needs development.

These three functions create the structure of the process: goal-setting and self-assessment feed the compensation and promotion decisions, while the appraisal discussion delivers the feedback function.

The Connection to Business Performance:

Infosys’s appraisal system is not disconnected from business performance. The variable pay multipliers (described in the Salary Hike guide) connect individual ratings to company and unit performance. An exceptional individual rating in a year of poor business performance still produces less total compensation than a good rating in a year of strong business performance, because the company multiplier reduces the variable pay across all bands.
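To make the interaction concrete, here is a minimal arithmetic sketch. The multiplier values (1.3x, 1.0x, 0.7x, 1.1x) and the simple product formula are hypothetical illustrations, not Infosys's actual parameters:

```python
# Hypothetical sketch of how an individual (rating-linked) multiplier and a
# company/unit multiplier might combine into a variable pay amount. The
# multiplier values and the product formula are illustrative assumptions,
# not Infosys's actual compensation parameters.

def variable_pay(target_amount: float, individual_mult: float, company_mult: float) -> float:
    """Variable pay as the target amount scaled by both multipliers."""
    return target_amount * individual_mult * company_mult

# An exceptional individual year (1.3x) in a weak business year (0.7x company
# multiplier) can pay out less than a good year (1.0x) in a strong one (1.1x).
weak_year = variable_pay(100_000, 1.3, 0.7)     # about 91,000
strong_year = variable_pay(100_000, 1.0, 1.1)   # about 110,000
```

The point of the sketch is only the shape of the dependency: the rating sets the individual multiplier, but the business year sets the other factor, and both multiply into the final payout.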

This means the appraisal must be understood in two parts: the relative assessment (how you rank among your peers in the calibration pool), which determines the band, and the absolute business performance (how Infosys and your unit did this year), which determines the total financial outcome within that band.

The Manager’s Role:

The manager is the central human actor in the appraisal process. They observe the employee’s daily work throughout the year, provide the formal assessment, advocate for the employee in calibration, conduct the appraisal discussion, and sign off on promotion recommendations. The quality of this relationship, and the quality of the evidence the employee provides the manager, is the primary variable that the employee can control in the appraisal process.

The HR Role:

HR facilitates the process: managing the iRace platform, running the calibration sessions, ensuring the forced distribution is applied, processing the increment letters, and handling any grievances or appeals that arise from the appraisal. HR does not independently assess individual performance; they set and enforce the process rules within which managers and business unit leaders make the assessments.


The iRace Platform: What It Is and How It Works

iRace (or its current equivalent in Infosys’s HR system) is the internal performance management platform through which all appraisal-related activities are conducted. Understanding its structure removes confusion about where things happen and what is recorded where.

Platform Access:

iRace is accessed through the InfyMe portal. The appraisal module is typically active during specific windows: goal-setting in April, mid-year check-in in September-October, and year-end assessment in January-February. Outside these windows, the module may be in view-only mode.

The Goal Repository:

The platform maintains a repository of goals for each employee across the appraisal year. The goals entered in April persist through the year and are the basis for the year-end assessment. Any goal modifications agreed mid-year (due to project changes or role shifts) should be updated in the platform, not just discussed verbally.

The Assessment Forms:

The year-end assessment involves two separate form completions: the employee’s self-assessment and the manager’s assessment. The employee completes their form first, rating themselves against each goal dimension and providing evidence. The manager then views the employee’s self-assessment before completing their own assessment, which may agree with or differ from the employee’s self-rating.

The Visibility Structure:

The employee can see: their own goals, their own self-assessment, and the manager’s final assessment after it is formally communicated. The employee cannot see: the calibration session deliberations, other employees’ assessments, or the manager’s initial assessment before calibration adjustments.

The manager can see: all of their direct reports’ self-assessments, the team’s overall rating distribution as submitted, and the distribution requirements from HR.

Senior management and HR can see: aggregate rating distributions across the unit, flagged cases where individual managers are deviating significantly from the required distribution.

Notification and Deadline Management:

The iRace platform sends automated notifications at each stage: when goal-setting opens, when the mid-year check-in is due, when the year-end assessment window opens, and when the deadline approaches. Missing these deadlines creates process complications and signals disengagement. Put the appraisal windows on your calendar at the start of the financial year.


Goal Setting: The Foundation of the Entire Appraisal

The goals set in April are the direct basis for the year-end assessment. This is the most underappreciated leverage point in the entire appraisal process. Employees who set strong goals in April are building the foundation for a strong appraisal ten months later.

The SMART Goal Framework in the Infosys Context:

The conventional SMART (Specific, Measurable, Achievable, Relevant, Time-bound) framework applies directly to Infosys goal-setting, with one caveat: the goal set should include both “achievable” goals that represent solid Band 3 performance and “stretch” goals whose full achievement would support a Band 2 case.

Poor goal example: “Improve technical skills and contribute effectively to the project.”

This goal cannot be measured, has no defined success criteria, and produces no evidence at year-end beyond “I worked on the project.” It makes the self-assessment difficult and the manager’s calibration argument impossible.

Strong goal example: “Complete the AWS Solutions Architect Associate certification by August 31. Deliver the order management module’s three assigned user stories in Sprint 3, 4, and 5 with zero severity-1 defects. Mentor two new joiners through their first sprint, reducing their assigned task completion time to within 20 percent of the team average by their second sprint.”

This goal is specific (AWS SAA, three user stories, two new joiners), measurable (certification completion, defect count, sprint completion time), has clear deadlines, and produces concrete evidence at year-end.

Goal Alignment with Manager:

Before submitting goals in the platform, review them with the manager and get explicit agreement. A manager who has agreed to specific goals cannot later say the goals were insufficient. The agreed goals are the performance contract for the year.

The goal alignment conversation should cover: “These are the goals I am proposing. Do they reflect what you would consider Band 2 performance if fully achieved? What would I need to add or change to make them more aligned with what the team needs this year?”

The Right Number of Goals:

Most Infosys appraisal forms have a defined number of goal dimensions (typically four to six categories: technical delivery, quality, learning, communication, teamwork, and initiative or innovation). One substantive goal per dimension is the right balance: comprehensive enough to cover all dimensions, focused enough that each goal can be achieved and evidenced.

Do not set twelve goals hoping to hit six. The assessment is holistic, and a self-assessment that demonstrates completion of six strong goals is more compelling than one that describes partial completion of twelve.

Updating Goals Mid-Year:

If project circumstances change significantly mid-year (project ends, new client, team restructuring), the original goals may no longer be relevant. In these cases, update the goals in the iRace platform in consultation with the manager. Goals that become unachievable due to circumstances outside the employee’s control should be revised, and the reason for the revision documented in the platform rather than left to verbal agreement.


The Mid-Year Review: More Than a Check-In

The mid-year review is often treated as an administrative obligation: fill in the form quickly, have a brief conversation, move on. This is a missed opportunity.

What the Mid-Year Review Actually Provides:

The mid-year review creates a documented record of performance at the halfway point. This record serves two purposes: it provides the manager with documented evidence of contributions that occurred early in the year (which may be forgotten by January without documentation), and it creates an opportunity to course-correct before the year-end assessment.

If the first half of the year went well, the mid-year review is the moment to document that explicitly. “The order management module was delivered on time in Sprint 3 and Sprint 4 with no production defects” written in the mid-year platform entry ensures this contribution is preserved in the record even if the second half involves a project transition.

If the first half did not go well, the mid-year review is the moment to identify specifically what changed. “I did not complete the AWS certification as planned due to the extended client requirements phase in Q1. I have rescheduled the target to October 31 and have completed 40 percent of the study material.” This documented acknowledgment and revised plan is better for the appraisal record than silence followed by an explanation at year-end.

The Mid-Year Conversation:

The mid-year conversation with the manager should cover: progress against each goal, any goals that need to be revised due to circumstance changes, feedback on what is working and what needs attention, and the manager’s current assessment of trajectory.

The direct question at the mid-year review: “Based on where I am at the halfway point, what would I need to do in the second half to achieve a Band 2 rating?” This question is professional, forward-looking, and gives the manager the opportunity to provide specific guidance rather than vague encouragement.

When Mid-Year Is the Warning Signal:

If the manager indicates at the mid-year review that performance is tracking toward Band 4, the months between the mid-year check-in and the year-end assessment are the window to change that trajectory. Band 4 is recoverable within a single appraisal year if the performance gap is identified and addressed specifically in the second half.

The worst outcome is receiving a Band 4 at year-end as a surprise. Managers who communicate early that there is a performance concern give the employee the opportunity to address it. If your mid-year signals anything below “on track for Band 3,” treat it seriously and ask for specific guidance on what change is needed.


The Year-End Self-Assessment: How to Write It Well

The year-end self-assessment is the most impactful single document in the appraisal process. It is the employee’s opportunity to present their year’s work as a complete, evidenced narrative. Managers who receive a strong self-assessment have the material they need to advocate effectively in calibration.

The Structure of a Strong Self-Assessment:

For each goal dimension (technical delivery, quality, learning, communication, teamwork, initiative), write:

  1. What the goal was (state it precisely).
  2. What you achieved against it (be specific and quantified where possible).
  3. The impact on the project, team, or client.
  4. Evidence that can be verified (sprint records, certification names, client communication dates, defect counts).

This four-part structure produces entries that are defensible in calibration because each claim is supported by specific, verifiable evidence.

The Evidence Quality Spectrum:

Weak evidence: “I worked hard and contributed effectively.”

Average evidence: “I delivered several user stories and supported the team.”

Strong evidence: “I delivered 14 user stories across Q3 and Q4 totaling 89 story points. The customer data migration module I led was signed off by the client on November 8 with no post-migration defects. The client project lead mentioned the migration in the November Sprint Review as ‘the smoothest data migration they had seen from any vendor.’”

The difference between strong and average evidence is not the quality of the work; it is the specificity and verifiability of the description.

Quantification Strategies:

For delivery goals: story points completed, user stories delivered, sprints participated in, features shipped.

For quality goals: defect rates on your module versus the project average, number of production incidents attributed to your code, test coverage percentage on your deliverables.

For learning goals: certification names and dates, Lex hours completed, courses finished.

For communication goals: number of client-facing presentations, number of meeting facilitation instances, positive client feedback mentions.

For teamwork and mentoring goals: number of people mentored, hours of pair programming, knowledge-sharing sessions conducted.

For initiative goals: internal tools or documents created, process improvements proposed and implemented, participation in innovation challenges.

Addressing Gaps Honestly:

If a goal was not fully achieved, the self-assessment should acknowledge this honestly with context: “The cloud certification was not completed as planned due to the client-side incident in Q3 that required extended on-call support for six weeks. I have completed 70 percent of the certification content and have rescheduled the exam for Q1 of the new year.”

An honest, contextualized acknowledgment of a gap is better than ignoring the goal or providing a vague “partially completed” without explanation. The manager knows whether the goal was completed; the question is whether you have an honest explanation.

The Self-Rating:

Most iRace forms ask the employee to rate themselves in each dimension (typically: Exceptional, Exceeds, Meets, Partially Meets). The self-rating should be honest and defensible.

Do not rate yourself all Exceptional in every dimension; this is not credible and reduces the impact of the self-assessment. Rate yourself Exceptional in the dimensions where the evidence genuinely supports it (a specific measurable achievement that was significantly beyond what was expected). Rate yourself Exceeds in dimensions where you met the goal and went meaningfully beyond it. Rate yourself Meets where you delivered what was required without notable stretch.

A self-assessment that rates the employee as “Exceeds” in technical delivery, “Meets” in communication, and “Exceeds” in learning is more credible than one that rates every dimension as “Exceptional.”

The Length Question:

Self-assessments that are too short (one or two sentences per goal dimension) do not provide adequate evidence. Self-assessments that are excessively long (five paragraphs per dimension) become difficult to use quickly in calibration. The sweet spot: two to four substantive sentences per dimension that include at least one specific quantified evidence point and one impact statement.

The Submission Deadline:

Submit the self-assessment well before the deadline. Last-minute submissions signal low engagement and give the manager less time to review before calibration. Aim to submit at least one week before the formal deadline, after reviewing it carefully one more time.


The Manager’s Assessment: What They Actually Evaluate

Understanding what the manager is assessing, and how that assessment is formed, helps employees provide the right evidence throughout the year.

The Assessment Dimensions:

Infosys’s performance dimensions vary slightly by business unit and appraisal cycle version, but consistently include:

Technical/Functional Delivery: the quality, timeliness, and completeness of the actual work produced. This is the primary dimension for engineering and delivery roles. The evidence base is: sprint completion rates, code review outcomes, defect rates, and client acceptance of deliverables.

Communication and Collaboration: the quality of written and verbal communication with team members, managers, and clients. The evidence base includes: meeting participation quality, written communication clarity, client feedback, and responsiveness to team requests.

Learning and Development: skills developed, certifications earned, and demonstrated application of new knowledge. The evidence base is: Lex records, certification completion, application of new skills in actual project work.

Initiative and Innovation: contributions beyond the assigned role scope, process improvements, internal tool development, and voluntary contributions to the team or organization. This dimension often differentiates Band 2 from Band 3.

Teamwork and People Development: contribution to team cohesion, mentoring of junior members, and participation in team activities. Senior employees are assessed on team development more heavily than freshers.

The Observation Problem:

Most managers do not keep running records of every team member’s daily contributions. By January, the memory of specific contributions from April and May is genuinely faded. The employee’s self-assessment is often the primary source of information the manager uses when they cannot independently recall specific details.

This is why the self-assessment quality matters so much: in many cases, the manager’s assessment is substantially derived from the employee’s self-assessment, supplemented by the manager’s own recollections of significant events. If the employee’s self-assessment is vague, the manager’s assessment has limited specific evidence to work with.

The Memory Bias Toward Recent Events:

Human memory has a well-documented recency bias: more recent events are more easily recalled than earlier ones. For performance assessment, this means that delivery in November and December is remembered more vividly than delivery in April and May.

Employees should be aware of this bias and actively counter it. Delivering strongly in the second half of the appraisal year (the “sprint to the finish”) is genuinely important for memory-based assessments. But it should not come at the expense of the first half; consistency across the year produces stronger evidence than uneven performance that peaks at the end.

What Managers Notice That Is Hard to Document:

Beyond the formal evidence, managers observe qualities that are difficult to quantify but that influence ratings: the proactiveness with which the employee takes ownership of issues rather than waiting to be told what to do, the quality of judgment shown when uncertain situations arise, the attitude toward constructive feedback (does the employee take it seriously or become defensive?), and the reliability of commitments (does “I’ll have this done by Thursday” always mean Thursday or sometimes mean the following Monday?).

These softer dimensions influence calibration advocacy: a manager who observes genuine ownership, good judgment, receptiveness to feedback, and reliable commitment-keeping advocates more strongly than one who observes the opposite, even if the formal delivery metrics are similar.


The Calibration Process: What Happens in the Room

Calibration is the mechanism that turns individual manager assessments into final ratings. It is the most consequential step that employees have the least direct visibility into.

The Calibration Structure:

Calibration sessions are organized hierarchically. In a typical Infosys delivery unit:

Level 1 Calibration: a senior manager or delivery manager reviews the ratings submitted by all the team leads and technology leads in their group. The group’s ratings are compared against the required distribution curve. Ratings that do not fit the distribution are challenged and potentially revised.

Level 2 Calibration: a business unit head reviews the ratings across delivery managers to ensure consistency of standards across the unit. Cross-team comparisons are made. Ratings that appear inconsistent with peers at the same level across different teams are questioned.

HR Calibration Review: HR reviews the final distribution to confirm it meets the organizational requirements. Units with distributions significantly deviating from the mandated curve are asked to recalibrate.

What Happens When Ratings Do Not Fit the Curve:

If a manager submits ratings for their team that would, in aggregate, require too many Band 1 and Band 2 ratings to be approved by the distribution, the senior manager challenges specific ratings in the calibration session. The manager must defend each Band 1 and Band 2 rating with specific evidence.

The question in calibration is not “is this person good?” but “is this person demonstrably better than the others competing for the limited Band 1 and Band 2 slots in this group?” This relative comparison is what the forced distribution enforces.

The Manager’s Advocacy Role:

A manager who can say in calibration: “Priya delivered six sprints on time, was cited by name in client feedback twice, completed two professional certifications, and her module had the lowest defect rate in the project” has a compelling case for Band 2.

A manager who says “Priya worked very hard this year and is a valued team member” does not have a compelling case for Band 2, because that description applies to many people in the room.

The difference between these two managers is not the quality of their judgment; it is the quality of the evidence they have access to. That evidence comes from the employee’s self-assessment and from specific events the manager observed and recorded. This is why the self-assessment and year-round evidence building directly affect calibration outcomes.

What Cannot Be Changed After Calibration:

Once a rating is finalized through the calibration hierarchy, it is locked in the HR system. The employee’s manager typically cannot unilaterally change it without going back through the calibration chain, which is a difficult and rarely approved process. This is why pre-calibration advocacy (the conversation with the manager before the calibration window) is so much more effective than post-calibration complaints.

The Timeline:

Calibration typically happens in February, approximately three to four weeks after the assessment window opens and employees submit their self-assessments. The manager communicates the final rating to the employee after calibration is complete, typically in March.


Forced Distribution: Understanding the Bell Curve

Forced distribution is the most emotionally difficult aspect of the Infosys appraisal system to accept and the most important to understand. It affects every employee regardless of absolute performance level.

The Basic Mechanics:

Infosys (like most large organizations) requires that ratings across a defined calibration pool follow a predetermined distribution. A typical distribution requirement:

Band 1 (Exceptional): approximately 5 to 10 percent of the pool
Band 2 (Exceeds): approximately 15 to 25 percent of the pool
Band 3 (Meets): approximately 40 to 55 percent of the pool
Band 4 (Partially Meets): approximately 10 to 20 percent of the pool
Band 5 (Does Not Meet): approximately 5 percent of the pool

These are organizational targets, not rigid quotas for every individual team, but the distribution across any large calibration group must approximate these proportions. In a calibration group of twenty engineers, this means that only one or two can receive Band 1, three to five can receive Band 2, eight to eleven will receive Band 3, two to four will receive Band 4, and around one may receive Band 5, regardless of absolute performance.
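The slot arithmetic can be sketched directly. The percentage ranges below mirror the approximate targets quoted in this section; mechanical rounding is an illustrative simplification, since real pools are adjudicated in calibration sessions, not computed:

```python
# Sketch of forced-distribution slot counts for a calibration pool.
# The percentage ranges mirror the approximate band targets above;
# rounding to the nearest whole slot is an illustrative assumption.

def band_slots(pool_size: int, band_ranges: dict) -> dict:
    """Map each band to its (low, high) slot-count range for the pool."""
    return {band: (round(pool_size * lo), round(pool_size * hi))
            for band, (lo, hi) in band_ranges.items()}

RANGES = {
    "Band 1": (0.05, 0.10),
    "Band 2": (0.15, 0.25),
    "Band 3": (0.40, 0.55),
    "Band 4": (0.10, 0.20),
    "Band 5": (0.00, 0.05),
}

# For a pool of twenty engineers: 1-2 Band 1 slots, 3-5 Band 2,
# 8-11 Band 3, 2-4 Band 4, and at most 1 Band 5.
slots = band_slots(20, RANGES)
```

Running the same function with a pool of forty (the SSE example later in this guide) doubles each range, which is why knowing the size and composition of your calibration pool matters.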

The Implication for High-Performing Teams:

In a team where all ten members are genuinely strong performers, the forced distribution still requires some of them to receive Band 4. A Band 4 in this context means “relatively weakest in this strong team,” not “below minimum acceptable performance.” This distinction is important but not always communicated clearly to employees who receive Band 4 in such contexts.

The professional response to a Band 4 in a high-performing team is not distress but curiosity: “I understand I am in the Band 4 tier for this calibration pool. Can you help me understand what specifically I would need to do differently to be in Band 3 next year?” This question treats the rating as actionable feedback rather than a judgment.

The Implication for Being in a Below-Average Team:

The reverse is also true: being the strongest performer in a below-average team produces a Band 2 or Band 1 more easily than being a strong performer in a high-performing team. The relative position matters as much as absolute performance.

This dynamic means that project assignment affects appraisal outcomes. Engineers deployed on projects with high average performance levels face stiffer competition for the upper bands than those deployed on projects with lower average performance. This is not entirely within the engineer’s control, but it is worth being aware of.

The Organizational Justification for Forced Distribution:

Infosys and most large companies use forced distribution for specific organizational reasons. Without it, there is a documented tendency for managers to rate most of their team members as “good” to avoid difficult feedback conversations, resulting in rating inflation that makes differentiation between genuine high performers and average performers impossible. Forced distribution forces the differentiation.

The system has real flaws: it can be demotivating for genuinely strong performers who are placed in Band 4 due to pool composition rather than individual shortcoming. But understanding why it exists makes it easier to navigate than resisting it as arbitrary.

How to Think About Your Calibration Pool:

Who is in your calibration pool? Typically, the pool consists of employees at the same designation level within the same delivery unit or sub-unit. Understanding who you are being compared against is the first step in understanding your competitive position.

If you are an SSE in a unit with forty SSEs, you are competing for the available Band 1 and Band 2 slots among forty people at your level. Knowing this changes the strategy: the question is not “did I do my job well?” but “did I do my job demonstrably better than most of the thirty-nine other SSEs in this pool?”


The Appraisal Discussion: Having the Conversation Well

The appraisal discussion with the manager is the moment when the rating is communicated, the evidence is reviewed, and development feedback is provided. How this conversation is conducted significantly affects both the current year’s outcome and the following year’s trajectory.

Before the Discussion:

Prepare by reviewing your self-assessment and the goals you set in April. Know your case: what did you deliver, what evidence supports each goal dimension, and where are the genuine gaps? Being prepared means you can engage specifically rather than reacting emotionally to the rating communication.

Also prepare emotionally: the rating may be lower than you expected. Receiving this calmly and asking productive questions produces better outcomes than an emotionally reactive response that makes the manager defensive.

During the Discussion:

When the manager communicates the rating, listen to the full explanation before responding. The manager is obligated to explain the rating and provide development feedback. Hear it completely.

If the rating is what you expected or better: acknowledge the positive elements and ask specifically about development areas. “Thank you for this feedback. Based on the Band 2 rating, what would make a Band 1 case for me next year? What is the one area where I have the most room to grow?”

If the rating is lower than expected: ask for specific evidence. “I want to understand this fully so I can improve. Can you help me understand what specifically, in the evidence you had available, produced the [Band 4] outcome rather than [Band 3]?” This question is professional and forward-looking. It does not challenge the rating; it asks for specific development information.

The Development Conversation:

Every appraisal discussion should include a development plan for the next year: what skills to build, what behaviors to change, what contributions to make. An employee who leaves the appraisal discussion with a vague “keep doing what you’re doing” has not received actionable feedback.

Push for specifics: “What would the strongest Band 2 performance look like in the next appraisal cycle for someone at my level and on this type of project?” This question asks the manager to describe the target state specifically, which is the information you need to aim at it.

What Not to Do:

Do not challenge the rating as unfair in the appraisal discussion. Even if you genuinely believe it is unfair, the discussion is not the mechanism for changing it. The calibration is already locked. The appropriate response is asking for evidence and development information, not contesting the outcome in the moment.

Do not compare yourself to named colleagues. “But Rahul got Band 2 and he doesn’t work as hard as I do” is unprofessional, almost never productive, and damages the relationship with the manager without changing the rating.

Do not bring a competing external offer to the appraisal discussion unless the intent is actually to use it as a retention conversation. Using an external offer as emotional leverage in a rating discussion rarely produces the intended effect.


360-Degree Feedback: How It Works and Why It Matters

In some Infosys appraisal cycles and for some employee levels, a 360-degree feedback process supplements the manager’s assessment with input from peers, cross-functional contacts, and in some cases direct reports (for managers).

Who Provides 360 Feedback:

The employee typically nominates a list of feedback providers who are approved by the manager. The list usually includes: two to four peer engineers who work directly on the same project, one or two cross-functional contacts (business analysts, project managers, or testers from the same project), and any stakeholders who can speak to the employee’s specific contributions.

The nomination process is an opportunity to select people who have direct visibility into your strongest contributions. This is not manipulation; it is ensuring that the feedback comes from people who have actual observations rather than peripheral awareness.

The Feedback Questions:

360-degree feedback forms typically ask responders to rate the employee on the same dimensions as the main appraisal (technical quality, communication, teamwork, initiative) and to provide a brief narrative comment. The numerical ratings aggregate into a 360 score that the manager can reference. The narrative comments are often more useful to the manager than the numerical scores.

How 360 Feedback Affects the Rating:

360 feedback is an input to the manager’s assessment, not an independent determinant of the rating. Uniformly positive 360 feedback from people who have direct visibility into the employee’s work strengthens the manager’s case in calibration. Negative 360 feedback, particularly if it mentions specific incidents, creates challenges in the calibration argument.

Building 360 Evidence Throughout the Year:

The feedback that colleagues can provide in December is built through the working relationships formed throughout the year. Engineers who are helpful to teammates, who communicate clearly, who reliably complete peer code reviews, and who contribute to knowledge-sharing receive genuinely positive 360 feedback because their colleagues have direct positive experiences to reference.

This is not a game; it is simply the observation that genuine professional contribution to the team naturally produces positive feedback from the team.

The Negative 360 Response:

If you receive 360 feedback that identifies specific behavioral or quality concerns, take it seriously. 360 feedback represents the experience of the people who work most directly with you. Dismissing it as biased or motivated by competition is a defensive response that prevents the development it is designed to enable.

The professional response: “I have seen some themes in the 360 feedback about my communication style in client-facing settings. I want to address this specifically. What resources or support does Infosys have for developing more effective client communication?”


The Appeal Process: When and How to Use It

The Infosys appraisal appeal process exists for specific procedural issues, not for general dissatisfaction with a rating outcome. Understanding when it is appropriate and how to use it prevents both missed escalations (not raising legitimate issues) and wasted effort (raising issues that the process cannot address).

What the Appeal Process Can Address:

Procedural errors: the appraisal was conducted by someone who did not have adequate observation of your work for the assessment period (for example, you reported to three different managers in the year due to project changes, and the final manager assessed you on only three months of work without input from the other managers).

Factual inaccuracies: the assessment contains statements about your performance that are demonstrably incorrect. For example: “did not complete the assigned module” when there is documented evidence that the module was completed and signed off.

Process violations: the appraisal process requires certain steps (manager review meeting, documented development feedback) that were not followed. If the formal process was not followed, this can be raised.

What the Appeal Process Cannot Address:

Disagreement with a judgment call: you believe you deserved Band 2 and received Band 3. Unless there is a specific procedural error or factual inaccuracy, disagreement with a judgment call is not grounds for appeal.

Forced distribution outcomes: you received Band 4 because the calibration pool required it, not because of any failure in your individual performance. This is a function of the organizational design, not a procedural error, and cannot be appealed.

Comparison with colleagues: you received Band 4 and believe a colleague who worked less hard received Band 3. Comparisons of this kind are not assessable by the appeal process because the calibration involves many factors and the process cannot independently verify relative performance claims.

How to File an Appeal:

The appeal process in Infosys runs through the HR grievance mechanism (Sparsh). The steps:

Document the specific procedural error or factual inaccuracy clearly. Be specific: “The year-end assessment states that I did not complete the customer migration module. JIRA ticket [ID] shows this module was completed on October 14 and the client acceptance was documented in the Sprint Review minutes of October 18.”

Submit the appeal through Sparsh with supporting documentation.

HR reviews the appeal and consults with the relevant managers and calibration participants. If the appeal identifies a genuine procedural error, the rating is reconsidered through a supplementary calibration.

The Likelihood of Appeal Success:

Appeals based on documented factual inaccuracies have a reasonable chance of producing a re-evaluation. Appeals based on procedural violations have a moderate chance. Appeals based on disagreement with judgment calls without procedural grounds are almost never successful.

The most effective approach is preventing the need for an appeal by engaging seriously with the process throughout the year rather than hoping to correct an unfavorable outcome after the fact.


How Ratings Connect to Promotion, Increment, and Career

The performance rating is not just a number on a form; it connects to every significant career outcome at Infosys. Understanding these connections changes how employees think about the appraisal.

Rating and Promotion Eligibility:

Promotion eligibility at Infosys requires a combination of: minimum time at the current designation, manager recommendation, and a track record of strong appraisal ratings over the promotion-eligibility period.

The specific rating history required for promotion varies by business unit, but a common requirement is Band 2 or better in at least two of the three most recent appraisal cycles. A single Band 3 in an otherwise strong Band 2 history is typically not disqualifying. A pattern of Band 3 ratings without Band 2 in the recent history makes promotion significantly harder.

Band 4 in the promotion-eligibility window is disqualifying in most cases. A Band 4 resets the trajectory and typically requires two consecutive Band 3 or better ratings before promotion consideration resumes.

Rating and Increment:

As described in the Salary Hike guide, the band rating directly determines the increment percentage applied to the fixed CTC. Band 2 produces 10 to 15 percent; Band 3 produces 6 to 9 percent. Over five years, the difference between consistent Band 2 and consistent Band 3 ratings compounds to a significant absolute salary gap.
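The compounding effect can be sketched with simple arithmetic. The percentages below are illustrative midpoints of the ranges stated above (Band 2 at 12 percent, Band 3 at 7.5 percent), and the starting CTC is a hypothetical figure, not an Infosys number:

```python
def project_salary(fixed_ctc: float, annual_increment: float, years: int) -> float:
    """Compound a fixed CTC by the same increment percentage for several cycles."""
    return fixed_ctc * (1 + annual_increment) ** years

start = 500_000  # hypothetical fixed CTC in rupees

band2_track = project_salary(start, 0.12, 5)    # assumed Band 2 midpoint
band3_track = project_salary(start, 0.075, 5)   # assumed Band 3 midpoint

print(f"Band 2 track after 5 years: Rs. {band2_track:,.0f}")
print(f"Band 3 track after 5 years: Rs. {band3_track:,.0f}")
print(f"Cumulative gap: Rs. {band2_track - band3_track:,.0f}")
```

On these assumptions, the gap after five cycles is roughly Rs. 1.6 lakh per year in fixed salary alone, before variable pay differences are counted.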

Rating and Variable Pay:

The individual multiplier applied to the variable pay target is directly derived from the band rating. Band 1 multiplier is highest; lower bands produce progressively lower variable pay payouts. In absolute terms, the variable pay difference between Band 2 and Band 3 at TA level (where variable pay targets are 8 to 10 percent of fixed CTC) can be Rs. 20,000 to Rs. 40,000 annually.
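The Rs. 20,000 to 40,000 range can be reproduced with the stated parameters. The multipliers below are purely hypothetical placeholders (actual band multipliers are internal and vary by year); only the 8 to 10 percent target range comes from the text:

```python
def variable_payout(fixed_ctc: float, target_pct: float, multiplier: float) -> float:
    """Variable pay = fixed CTC x target percentage x band multiplier."""
    return fixed_ctc * target_pct * multiplier

fixed = 800_000  # hypothetical TA fixed CTC in rupees
target = 0.09    # 9% target, within the 8-10% range stated above

band2_pay = variable_payout(fixed, target, 1.0)  # assume Band 2 pays 100% of target
band3_pay = variable_payout(fixed, target, 0.6)  # assume Band 3 pays 60% of target

print(f"Annual variable pay gap: Rs. {band2_pay - band3_pay:,.0f}")
```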

Rating and Project Assignment:

The Resourcing Management Group (RMG) that manages project deployments has visibility into performance ratings. Engineers with strong rating histories are more likely to be deployed on high-profile projects and preferred client accounts. This creates a virtuous cycle: strong ratings lead to better project assignments, which create opportunities for stronger future ratings.

Rating and Internal Mobility:

Internal job posting (IJP) applications are evaluated partly on the applicant’s rating history. A Band 2 history strengthens an IJP application; a Band 4 in the recent history weakens it significantly. Engineers who want to change streams, move to different projects, or pursue practice roles benefit from building a strong rating track record before applying.


Performance Improvement Plans: What They Are and How to Exit One

A Performance Improvement Plan (PIP) is a formal document that Infosys issues to employees whose performance has been assessed as below the minimum acceptable level. Understanding what a PIP is, why it is issued, and how to respond to it determines whether it results in recovery or separation.

When a PIP Is Issued:

A PIP is typically triggered by: a Band 5 rating in an annual appraisal, a Band 4 rating in two consecutive cycles, significant quality or delivery failures that the manager has flagged as serious performance concerns, or behavioral issues that have been documented in the performance record.

The PIP is not the first step; it follows a documented history of performance concerns. Employees who receive a Band 4 without prior warning (their mid-year was positive and no development concerns were communicated) and then face a PIP have grounds to raise the sudden escalation with HR as a procedural concern.

The Structure of a PIP:

A PIP is a formal document that specifies: the specific performance gaps being addressed, measurable improvement targets that must be met within a defined timeframe (typically 60 to 90 days), the support that Infosys will provide to facilitate improvement (additional training, closer supervision, adjusted task scope), and the consequences of not meeting the targets (which may include separation).

The PIP must be agreed to and signed by the employee. Refusing to sign does not prevent the PIP from being effective, but raises questions about engagement that are worth discussing with HR.

How to Respond to a PIP:

The single most important response to a PIP is taking it seriously rather than reacting with denial or resignation. The PIP specifies exactly what needs to improve and by when. It is an unusually specific set of instructions for what success looks like.

Steps when receiving a PIP: Read it carefully. Understand every target and deadline. Ask for a meeting with the manager to clarify anything ambiguous. Assess honestly whether the targets are achievable within the timeframe given the current workload. Begin working on each target immediately, not in the final two weeks of the PIP period. Document your own progress weekly and share it with the manager without waiting for them to ask.

The HR Support Role During a PIP:

HR is required to provide support resources during a PIP: access to specific training, potential workload adjustments, and a designated HR business partner contact for questions. Ask for these explicitly rather than assuming they will be offered.

Exiting a PIP Successfully:

Successfully completing a PIP means meeting each specified target within the defined timeline with documented evidence. When the PIP period ends, the manager reviews progress against each target and makes a determination. Successful completion typically leads to a Band 4 or Band 3 rating in the subsequent cycle and allows the career to continue.

An employee who has completed a PIP successfully has demonstrated genuine responsiveness to feedback, which is itself a quality that managers value. It is not a career-ending event; it is a documented performance reset from which recovery is possible.

When PIP Leads to Separation:

If the PIP targets are not met within the specified timeframe, the typical next step is a mutual separation discussion. This is distinct from a termination for cause; it is a documented managed exit where both parties agree that the role is not a good fit for the employee.

The distinction matters for future employment: a mutual separation can be represented accurately in subsequent employment history, while a termination for cause creates more complex background verification dynamics.


Common Appraisal Mistakes and How to Avoid Them

The following are the most consistently observed appraisal process mistakes, with specific corrections.

Mistake 1: Treating Goal-Setting as a Formality

Many employees enter generic, vague goals in April because they want to complete the goal-setting task quickly. These goals become a liability at year-end because they produce no evidence.

Correction: spend two to three hours on goal-setting in April. Review the goals with the manager and get specific agreement on what Band 2 achievement would look like for each goal.

Mistake 2: Not Documenting Contributions Throughout the Year

Relying on memory at year-end produces an incomplete self-assessment. Contributions from April and May are genuinely difficult to recall in detail in January.

Correction: maintain a running contribution log. After each sprint, write two to three sentences about what was delivered, any client recognition, and any notable quality outcome. This ten-minute weekly habit produces the complete evidence base for the self-assessment in January.

Mistake 3: Submitting a Vague Self-Assessment

The two-sentence-per-dimension self-assessment is the most common appraisal mistake. It gives the manager nothing to use in calibration.

Correction: write a full self-assessment with specific evidence for every dimension. Use the structure: goal statement, achievement, evidence, impact.

Mistake 4: Waiting Until January to Think About the Appraisal

The work that produces a Band 2 rating happens throughout the year. Trying to engineer a Band 2 outcome in the final weeks of the year is too late.

Correction: treat the appraisal as a year-round process. The sprint completed today is evidence for January’s self-assessment. The certification completed in August is documented evidence. The client feedback email in October is filed as evidence.

Mistake 5: Not Having the Pre-Calibration Conversation

Many employees assume the manager is advocating strongly for them in calibration without having any specific conversation to enable that advocacy.

Correction: in early January, before the calibration window, have a specific conversation with the manager about the highlights you want emphasized. Provide the manager with the three to five most compelling pieces of evidence for the band you are targeting.

Mistake 6: Challenging the Rating in the Appraisal Discussion

Arguing emotionally with the manager about an unfair rating in the formal discussion is ineffective and damages the relationship.

Correction: ask for evidence and development feedback rather than contesting the outcome. Channel any formal challenges through the Sparsh appeal process with documented procedural grounds.

Mistake 7: Ignoring Mid-Year Signals

If the mid-year review signals a concern, treating it as a formality and continuing unchanged is a guarantee of a poor year-end rating.

Correction: mid-year concerns are actionable. The second half of the year is the window to change a trajectory. Take any mid-year concern seriously and ask for specific guidance on what change is needed.

Mistake 8: Self-Assessing All Exceptional

An across-the-board Exceptional self-rating is not credible to the manager or to calibration. It reduces the manager's ability to use the self-assessment as evidence, because calibration participants will dismiss it as coming from someone who cannot make differentiated assessments of their own work.

Correction: rate yourself honestly. Exceptional where the evidence genuinely supports it; Exceeds where it supports a meaningful stretch; Meets where you delivered what was required.


Frequently Asked Questions

1. What is the difference between the iRace system and the regular performance appraisal?

iRace is the name of the performance management platform through which the appraisal is conducted. The appraisal is the process; iRace is the tool. All goal-setting, mid-year reviews, self-assessments, and manager assessments are entered through the iRace platform, which feeds into the calibration and increment letter generation process.

2. How long does a self-assessment typically need to be?

A substantive self-assessment for an annual appraisal should have at least three to five sentences per goal dimension, including at least one specific quantified achievement and one impact statement per dimension. For a typical six-dimension appraisal form, this means approximately two to three pages of text in the platform. Longer is acceptable if the evidence is genuinely substantive; shorter signals insufficient engagement.

3. Can I see my manager’s assessment before the calibration is finalized?

No. The manager’s assessment is visible to the employee only after the rating is formally communicated, which happens after calibration is complete. This is deliberate: allowing employees to see draft manager assessments before calibration would create pressure to modify assessments based on employee feedback rather than performance evidence.

4. What happens if I disagree with the goals my manager sets for me?

Goal-setting is intended to be a collaborative process. If goals are set by the manager without your input, request a discussion to review them. If goals are set that you believe are unreasonable or unachievable given the project context, raise this at the goal-setting stage, not at year-end. Document any unresolved disagreements about goal appropriateness in writing to HR.

5. How many previous appraisal cycles are visible to the calibration participants?

Typically, the current year’s assessment and one to two prior years are visible to calibration participants. This history affects promotion considerations: a pattern of Band 3 ratings over three years is relevant context for a promotion recommendation, as is a pattern that shows progression from Band 3 to Band 2.

6. Does the Infosys appraisal consider performance before joining Infosys?

No. The Infosys appraisal assesses performance during Infosys employment only. Prior employment experience may have contributed to your current skill level (and thus your ability to perform well), but the formal assessment record begins with your Infosys joining date.

7. Is the forced distribution applied at the team level or business unit level?

The forced distribution is typically applied at the calibration group level, which may span multiple teams within a delivery unit. The distribution is not rigidly applied within individual teams of five to ten people but must approximately hold across the full calibration group of twenty to fifty or more people.
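The arithmetic of a calibration-group quota can be sketched as follows. The band percentages are representative figures consistent with the ranges in this guide, not official Infosys quotas:

```python
# Representative band shares (illustrative, not official distribution targets).
DISTRIBUTION = {
    "Band 1": 0.08,  # roughly the 5-10% range described in this guide
    "Band 2": 0.25,
    "Band 3": 0.50,
    "Band 4": 0.13,
    "Band 5": 0.04,
}

def band_quotas(pool_size: int) -> dict:
    """Approximate headcount per band for a calibration pool of a given size."""
    return {band: round(pool_size * share) for band, share in DISTRIBUTION.items()}

print(band_quotas(40))
```

For a forty-person calibration group, this yields roughly three Band 1 slots and twenty Band 3 slots, which is why a single team of eight may plausibly contain zero or two Band 1 engineers while the group-level totals still hold.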

8. Can I nominate my project lead (who is from the client side) as a 360 feedback provider?

Typically, 360 feedback providers must be Infosys employees because the feedback process runs through Infosys’s internal systems. Client contacts cannot be nominated directly. However, documented client feedback (emails, sprint review comments) that praises specific contributions can be included in the self-assessment and cited by the manager in calibration.

9. What is the impact of a promotion on the appraisal rating for the following year?

A promotion does not directly affect the rating in the following year’s appraisal. However, after a promotion, the employee is assessed at the higher designation level, which typically has higher expectations. A Band 2 TA may be rated Band 3 as an early TL if their performance has not yet reached TL-level expectations. The promotion is an opportunity and an elevated bar simultaneously.

10. How are employees who join mid-year assessed in the appraisal cycle?

Employees who join mid-year are typically assessed on a prorated basis for the period of service in the appraisal year. Goal-setting for mid-year joiners is done shortly after joining rather than in April. The assessment covers only the goals set after joining and the contributions made from the joining date.

11. Can an employee receive two consecutive Band 1 ratings?

Technically yes, if the performance evidence genuinely supports it in both calibration cycles. In practice, Band 1 is limited by the forced distribution to approximately 5 to 10 percent of the calibration pool. An employee who was in the top 5 to 10 percent in one year has a reasonable chance of being in the top 5 to 10 percent the following year if their performance remains at the same level relative to peers. It is not guaranteed by the first Band 1, however.

12. What is the iRace rating scale - is it 1 to 5 or something different?

The rating scale in Infosys’s performance management system has evolved over time and may be presented as a numeric scale (1 through 5), a letter scale, or a label scale (Exceptional, Exceeds, etc.) depending on the version of the platform in use when you are reading this. The underlying logic is consistent: five bands from highest to lowest, with the highest being the most restricted by distribution rules. Always clarify with HR how the current scale maps to the band descriptions.

13. Do certifications from external providers (AWS, Azure, etc.) carry more weight than Lex-completed courses?

Both are valuable but serve different purposes in the appraisal. External professional certifications (AWS SAA, Databricks, ISTQB) are externally verifiable credentials that demonstrate skill at a professionally recognized standard. Lex course completions demonstrate engagement with Infosys’s learning ecosystem and are tracked in the performance record. Strong self-assessments cite both: the external certification as proof of professional skill development, and the Lex hours as evidence of continuous learning engagement.

14. What happens if I change managers mid-year due to a project change?

When a manager change happens mid-year, the appraisal should reflect input from all managers who supervised the employee during the year, not just the current manager. If you change managers in October and the year-end assessment is in January, the new manager should gather input from the previous manager covering the April to October period. If this does not happen automatically, request it: “I would like to make sure my previous manager’s observations are included in the year-end assessment for the period they supervised me.”

15. Is there a way to check what band distribution Infosys has targeted this year?

The specific distribution targets for a given appraisal year are internal HR information and are not typically shared publicly with employees. The broad distribution parameters described in this guide are representative of typical years. HR business partners within the organization may be able to confirm whether the current year’s distribution requirements differ significantly from the typical range.


The Appraisal Year Calendar: A Complete Reference

April 1-30: Goal Setting

Open iRace goal module. Review last year's assessment. Draft new goals. Review with manager. Submit agreed goals by April 30 deadline.

May-August: First Half Execution

Deliver against goals. Maintain contribution log. Complete any Q1 certifications or training targets.

September 1-30: Mid-Year Review

Open iRace mid-year module. Complete mid-year progress assessment. Have mid-year conversation with manager. Adjust goals if circumstances changed. Get specific feedback on second-half focus.

October-December: Second Half Execution

Accelerate delivery toward year-end. Ensure all certifications targeted for the year are completed. Collect any client feedback or recognition that can be referenced. Archive key evidence (sprint reports, certification completion records, client emails).

January 1-31: Year-End Self-Assessment

Open iRace year-end assessment module. Write full self-assessment using the evidence collected throughout the year. Rate each dimension honestly. Submit well before the deadline.

Late January to February: Manager Assessment and Calibration

Manager completes their assessment. Calibration sessions occur. Ratings are finalized. The employee has no direct involvement in this phase but the pre-calibration conversation in early January was the last influence point.

March: Rating Communication and Increment Letter

Manager communicates the rating in a one-on-one discussion. Increment letter issued through iRace or HR system. Review the letter. Raise any errors through Sparsh immediately.

April 1: New Financial Year Begins

Revised salary effective. New goal-setting cycle opens. Start the next appraisal year with the previous year's development feedback in hand.

This twelve-month calendar, followed deliberately, is the operational plan for maximizing appraisal outcomes. The appraisal is not an event in February; it is a year-round process in which the February assessment is only the culmination of what was built from April.


The Psychology of the Appraisal: What Research Tells Us

Understanding the psychological dynamics of performance appraisals helps employees engage with the process more effectively. Several well-documented cognitive biases affect appraisal outcomes at Infosys and in every large organization.

The Halo Effect:

When a manager has a strongly positive impression of an employee in one dimension, this tends to inflate their assessment of other dimensions as well. An engineer who is technically brilliant often receives higher ratings on communication and teamwork than their actual behavior in these areas strictly warrants, because the technical excellence creates a positive halo.

The implication: establishing excellence in the dimension your manager values most highly early in the year creates a halo that benefits all other dimensions in the assessment.

The Recency Bias:

More recent events are disproportionately weighted in the assessment. Work done in November and December is remembered more vividly than work done in May. A strong finish to the year compensates for a slower middle.

The implication: maintain consistent delivery throughout but be aware that the last two months of the appraisal year are the most visible in the manager’s memory when they complete the assessment in January.

The Attribution Error:

Successes tend to be attributed by managers to the team rather than the individual, while failures tend to be attributed to individuals rather than circumstances. A successful project delivery is “the team did great work”; a delayed module is “[person’s name]’s module was delayed.”

The implication: be deliberate about making your individual contributions visible within team successes. “The customer migration project delivered on time - my specific contribution was the data transformation layer” is how individual attribution is established.

The Similarity Bias:

Managers unconsciously rate employees who are similar to themselves (communication style, approach to work, background) more favorably than those who are different. This is a documented cognitive bias, not intentional discrimination.

The implication: understanding what your manager values and adapting your communication and work style to align with it (while remaining authentic) reduces the similarity bias friction.

The In-Group Bias:

Employees who are physically present with the manager (in the office, on-site) tend to receive slightly higher ratings than equivalent employees who work remotely, because presence creates more observation opportunities and relationship reinforcement.

The implication: for remote workers, deliberate communication visibility (proactive status updates, regular check-ins, presence in virtual meetings) compensates for the reduced physical presence.


Building the Evidence File: A Practical System

The most consistently successful Infosys appraisal performers maintain an evidence file throughout the year. The following describes a practical system for this.

The Weekly Two-Minute Log:

Every Friday afternoon (or the last working day of the week), write two to three sentences answering these questions:

What did I complete this week that I could reference in my self-assessment? Was there any client recognition, positive feedback, or notable quality outcome this week? Did I complete any certification module, training, or learning activity this week?

This weekly log takes two minutes in the moment and produces a complete record of 52 weeks of contributions that makes the January self-assessment easy to write.
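The same habit can be kept in any plain-text tool; as one possible sketch, a small script that appends a dated entry answering the three weekly questions (the file name and entry format here are arbitrary choices, not a prescribed Infosys tool):

```python
from datetime import date
from pathlib import Path

LOG = Path("contribution_log.txt")  # hypothetical file name

def log_week(delivered: str, recognition: str = "", learning: str = "") -> None:
    """Append one dated entry answering the three weekly questions."""
    entry = [f"## Week of {date.today().isoformat()}",
             f"Delivered: {delivered}"]
    if recognition:
        entry.append(f"Recognition: {recognition}")
    if learning:
        entry.append(f"Learning: {learning}")
    with LOG.open("a", encoding="utf-8") as f:
        f.write("\n".join(entry) + "\n\n")

# Example entry (illustrative content only):
log_week("Completed payment-reconciliation module; zero post-release defects.",
         recognition="Client PM thanked the team in the sprint review.",
         learning="Finished one AWS certification practice module.")
```

By January, the accumulated file is the raw material for the self-assessment, already organized by week.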

The Evidence Categories to Track:

Delivery: sprint completions, feature releases, bug fixes with specific ticket numbers, module go-lives.

Quality: any quality metrics relevant to your work (defect rate, test coverage, code review outcomes).

Recognition: client feedback emails (saved with date), positive manager comments, peer recognition, sprint review shout-outs.

Learning: certification completions with dates, Lex course completions, external training attended.

Initiative: documents or tools created beyond assigned scope, process improvements proposed, internal knowledge-sharing sessions conducted.

People: mentoring instances, onboarding support provided, cross-team collaboration that produced specific outcomes.

Storage:

Use a simple document (Google Doc, OneNote, or equivalent) organized by week. At the end of each quarter, review the entries and group them by appraisal dimension. By January, the evidence is organized and ready to use in the self-assessment.

The Client Email Archive:

Any email, Slack message, or Teams message from a client or senior stakeholder that praises specific work should be saved with the date and the sender’s name. These direct quotes are powerful calibration evidence: “Client project lead specifically stated on October 12 that the data migration was ‘the most reliable delivery we have seen from any vendor partner’” is a statement that managers use in calibration because it is verifiable and comes from a source outside the team.


Appraisal Conversations at Different Designation Levels

The appraisal conversation looks different at SE level versus TL level because the expectations being assessed differ at each designation level.

SE Level (0-2 Years):

The primary assessment dimensions at SE level are: technical delivery quality (did the code work, were tests written, were bugs introduced), learning velocity (is the skill set growing), and professional behavior (reliability, communication, attitude toward feedback).

The SE appraisal conversation typically focuses on: specific sprint delivery outcomes, feedback on code quality from reviews, certification progress, and guidance on which technical areas to develop next.

The self-assessment at SE level should demonstrate: consistent sprint delivery, genuine engagement with learning (certifications, Lex courses, skill application), positive peer and manager feedback, and at least one initiative beyond the assigned scope (documentation, mentoring a newer joiner, raising a process improvement).

SSE Level (2-5 Years):

At SSE level, the assessment shifts toward: greater autonomy in delivery (can complete complex tasks with minimal supervision), mentoring contributions to junior team members, and beginning to take module or component ownership rather than individual task ownership.

The appraisal conversation at SSE level discusses: module delivery outcomes, junior team member development contributions, technical knowledge depth, and readiness for TA-level responsibilities.

TA Level (5-8 Years):

TA appraisals assess: technical design contribution (not just implementation), independent project component ownership, client interaction quality, and beginning people leadership.

The appraisal conversation at TA level is more strategic: what is the career direction (individual contributor track toward architecture, or people manager track), what specific technical specialization is being built, and what is the readiness for TL-level delivery management.

TL and DM Level (8+ Years):

Senior level appraisals assess: delivery of the team’s outcomes (not just individual output), client relationship development, practice and capability contributions, and P&L awareness (for DMs).

The appraisal conversation at TL/DM level is substantially about leadership effectiveness: what did the team deliver under your management, how did you develop the team’s capabilities, what did you contribute beyond the immediate project.


The Infosys Appraisal and Career Planning: Connecting the Dots

The appraisal is not just a compensation mechanism; it is the primary formal mechanism through which Infosys knows what you are doing, where your strengths lie, and where you want to go. Treating the appraisal as a career planning tool, not just a compensation event, changes how you engage with it.

The Appraisal as Career Signal:

The goal categories you set, the self-assessment you submit, and the development feedback you ask for are all career signals to the manager and to HR. An employee who consistently sets goals involving cloud architecture and requests development feedback on architecture skills is signaling a career direction. This signal influences project assignment decisions and promotion consideration framing.

Be deliberate about the career signal your appraisal sends. If you want to move into a leadership track, your goals should include people development contributions and your self-assessment should highlight people leadership outcomes. If you want to move into a technical specialist track, your goals should include technical research and contribution and your self-assessment should highlight technical depth.

Using the Development Feedback as a Career Roadmap:

The development feedback from the appraisal discussion is the manager’s direct observation of what you need to build to progress. This is information that is difficult to get in any other structured way. Use it seriously: turn the development areas identified in the appraisal into specific goals for the next year.

An employee who says “last year you identified client communication as a development area. I have made it a specific goal this year, including volunteering for the client sprint reviews and completing the Infosys presentation skills module. Here is what I did” is demonstrating exactly the responsiveness to feedback that produces Band 2 ratings.

The Multi-Year Pattern:

Career progression at Infosys is not evaluated on single-year performance but on multi-year patterns. A Band 2, Band 2, Band 3 pattern over three years is typically read as “strong performer going through a challenging year.” A Band 3, Band 3, Band 3 pattern is typically read as “consistently adequate but not progressing.” A Band 4, Band 3, Band 2 pattern is typically read as “recovering and demonstrating growth.”

Patterns matter. The first appraisal establishes the starting point; the trend across years is what promotion committees evaluate.


Quick Reference: Appraisal Do’s and Don’ts

Do:

  • Set specific, measurable goals in April with manager agreement
  • Maintain a weekly contribution log throughout the year
  • Complete the mid-year review form genuinely and use the conversation to course-correct
  • Write a full, evidence-rich self-assessment with specific quantified achievements
  • Have the pre-calibration conversation with your manager in early January
  • Nominate 360 feedback providers who have direct visibility into your strongest contributions
  • Ask for specific development feedback in the appraisal discussion
  • Check the increment letter arithmetic before the April payroll processes
  • Use the appeal process if you have documented procedural grounds

Don’t:

  • Treat goal-setting as a formality requiring minimal thought
  • Wait until January to think about the appraisal year
  • Write a vague self-assessment (“I worked hard and contributed effectively”)
  • Rate yourself Exceptional in every dimension
  • Challenge the rating in the appraisal discussion without procedural grounds
  • Compare yourself to named colleagues in appraisal discussions
  • Ignore mid-year performance signals
  • Assume the manager will remember everything you did without prompting
  • Mistake variable pay for guaranteed compensation when planning expenses

Closing: The Appraisal Is a Tool, Not a Verdict

The Infosys performance appraisal, navigated well, is a powerful career tool. It provides a structured annual opportunity to make visible what you have contributed, receive specific feedback on what to develop, and establish your professional trajectory in the organizational record.

Navigated poorly, it is an annual source of disappointment and confusion: unexpected ratings, inadequate feedback, and the feeling that the outcome was arbitrary.

The difference between these two experiences is almost entirely in the deliberateness with which the employee engages with the process. The system is not perfectly designed; no performance management system is. The forced distribution creates real injustices in specific years for specific teams. The recency bias affects all managers regardless of their intention.

But within these structural constraints, the employee who sets strong goals in April, documents contributions throughout the year, writes a specific and evidence-rich self-assessment, has the pre-calibration conversation in January, asks for specific development feedback in March, and uses that feedback to build next year’s goals is doing everything within their power to produce the best available outcome.

The appraisal system responds to evidence. Build the evidence throughout the year. Present it clearly at year-end. The outcomes, while never entirely within any individual’s control, are materially better for the employee who does this than for the one who does not.

This guide, and the 30-article InsightCrunch Infosys Series of which it is part, provides the complete information infrastructure for navigating every stage of the Infosys career journey with exactly this kind of deliberateness.



Worked Examples: Strong vs Weak Self-Assessment Entries

The gap between a weak and a strong self-assessment entry is clearer when shown side by side. The following pairs illustrate the difference across five key dimensions.

Technical Delivery:

Weak: “I worked on various modules and delivered my tasks. I was always available to help the team with technical issues.”

Strong: “I delivered 23 user stories across Sprint 3 through Sprint 8 totaling 134 story points, representing 31 percent of the team’s total sprint velocity during Q2 and Q3. I took ownership of the payment reconciliation module after the original developer was reassigned in August and stabilized it within three weeks, enabling the client’s August release milestone to be met on schedule. My module had zero severity-1 production defects post-release. The release retrospective notes (August 30) specifically cite the reconciliation module stability as a highlight.”

Learning and Development:

Weak: “I completed several training courses and improved my skills in cloud and data areas.”

Strong: “I completed the AWS Solutions Architect Associate certification on July 14 (certificate ID: AWS-SAA-2024-XXXXX). I completed 18 Lex courses totaling 42 learning hours, including the Databricks Lakehouse Fundamentals path (completed September 8). I applied the Databricks knowledge directly in the Q3 data migration project, where I redesigned the transformation layer to use Delta Lake format, reducing pipeline execution time by 37 percent. This optimization was flagged by the TL in the Q3 retrospective as a significant technical improvement.”

Communication:

Weak: “I communicated effectively with the team and clients throughout the year.”

Strong: “I presented technical architecture updates in 6 of the 8 client sprint review calls during Q2 and Q3, receiving positive written feedback from the client project manager on two occasions (emails dated August 7 and October 12). I produced the team’s technical runbook for the payment module (15-page document) which was adopted as the standard template for three subsequent module runbooks. I facilitated three weekly knowledge-sharing sessions with the team on AWS Lambda and API Gateway, attended by an average of eight team members.”

Teamwork and People Development:

Weak: “I was a good team player and helped new joiners when needed.”

Strong: “I formally onboarded two new joiners (Ananya joined May 15, Rohan joined August 1). I conducted structured pair programming sessions with each for their first four sprints and reviewed their code daily for the first two sprints. By Sprint 3, Ananya’s average story point delivery per sprint matched the team average. I raised two concerns in retrospectives that were converted into sprint process improvements (reducing manual testing steps for regression cases), which the team estimated saves approximately two hours per sprint cycle.”

Initiative and Innovation:

Weak: “I took initiative in identifying issues and suggesting improvements where possible.”

Strong: “I identified that the team’s deployment pipeline was requiring manual intervention in 3 of 5 deployments due to an unhandled API timeout condition. Without being asked, I researched the issue, built a proof of concept fix in two evenings, and presented it to the TL. The fix was implemented in Sprint 7 and reduced manual deployment interventions to zero over the subsequent 8 deployments. I estimated this saves approximately 30 minutes per deployment cycle. I also contributed an article to the Infosys internal knowledge base on handling API timeout conditions in Kubernetes environments, which received 47 views from engineers across three other projects.”

The pattern across all five examples: the strong entries name specific deliverables, provide specific metrics or evidence, cite specific dates or references where available, and state a concrete impact. The weak entries describe behavior in vague, unverifiable terms. The difference in calibration effectiveness is substantial.


Infosys Appraisal Glossary

Band: the performance rating category (Band 1 through Band 5) that determines increment and variable pay outcomes.

Calibration: the multi-level process through which individual manager assessments are reviewed and adjusted to produce a rating distribution consistent with the forced distribution requirements.

Forced Distribution (Bell Curve): the organizational requirement that ratings across a calibration pool follow a pre-defined percentage distribution, limiting how many employees can receive the highest and lowest bands.

Goal Setting: the April process of defining specific performance targets for the appraisal year, entered in the iRace platform and agreed with the manager.

iRace: Infosys’s performance management platform (or its current equivalent) through which all appraisal-related activities are conducted.

Individual Multiplier: the variable pay multiplier derived from the band rating, applied to the target variable pay calculation.

Mid-Year Review: the September/October interim appraisal check that documents progress against year goals and provides an opportunity to course-correct.

Performance Improvement Plan (PIP): a formal document issued to employees with sustained below-standard performance, specifying measurable improvement targets within a defined timeframe.

Self-Assessment: the structured document the employee completes at year-end, rating their own achievement against each goal dimension with supporting evidence.

360-Degree Feedback: supplementary feedback from peers, cross-functional contacts, and direct reports (for managers) that provides additional perspectives on the employee’s performance beyond the manager’s direct observation.

Variable Pay Target: the maximum variable pay amount that would be paid if the company, unit, and individual multipliers all equaled 1.0. The actual payout is this target multiplied by all three multipliers.
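
The three-multiplier payout formula in this definition can be sketched as a small calculation. The multiplier values below are illustrative assumptions for demonstration, not actual Infosys figures:

```python
def variable_pay_payout(target: float, company_mult: float,
                        unit_mult: float, individual_mult: float) -> float:
    """Actual payout = target x company x unit x individual multipliers."""
    return target * company_mult * unit_mult * individual_mult

# Illustrative numbers only: a 100,000 INR variable pay target in a year
# where the company multiplier is 0.80, the unit multiplier is 0.90, and
# the individual multiplier (hypothetically tied to a Band 2) is 1.10.
payout = variable_pay_payout(100_000, 0.80, 0.90, 1.10)
print(round(payout))  # 79200
```

Note that even with a strong individual multiplier, a weak company or unit year pulls the payout well below the target, which is why the guide advises planning household expenses on fixed pay only.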


The Ten Most Important Things to Know About the Infosys Appraisal

  1. The appraisal is annual but built year-round. The evidence that determines your rating is created throughout the year, not in January.

  2. The self-assessment is your most powerful tool. It provides the manager with the specific evidence needed for calibration advocacy.

  3. Forced distribution is real. In any calibration pool, a proportion of employees must receive each band regardless of absolute performance. Your band reflects your relative position in your peer group.

  4. Goals set in April determine what can be evidenced in January. Vague goals produce vague evidence; specific goals produce specific evidence.

  5. The pre-calibration conversation is your last influence point. After calibration is finalized, the rating cannot be changed without procedural grounds.

  6. Variable pay is not guaranteed. It is a target subject to company, unit, and individual multipliers. Plan household expenses on fixed pay only.

  7. Band 4 is recoverable. It is not a career-ending event. It triggers a performance improvement process. The question is what specifically needs to change, not whether the career is over.

  8. The mid-year review is a course-correction opportunity. If the mid-year signals a concern, the second half of the year is the window to address it.

  9. The appeal process has specific grounds. It addresses procedural errors and factual inaccuracies, not disagreements with judgment calls.

  10. The appraisal sends career signals. The goals you set, the skills you develop, and the development feedback you ask for all communicate your career direction to management.

These ten points summarize the complete guide. The details matter; the guide exists to provide them. But these ten points are the operating principles that drive the difference between a deliberate appraisal strategy and a reactive one.


When Good Employees Receive Band 4: Understanding the Context

One of the most demoralizing appraisal outcomes at Infosys is receiving a Band 4 when the employee genuinely believes their performance was adequate. This happens more often than most employees realize and for reasons that are important to understand rather than simply accept as unfair.

The High-Performing Team Scenario:

In a project team where all ten engineers are genuinely above average (this happens, particularly in premium digital projects where the hiring bar is higher), the forced distribution still requires two of the ten to receive Band 4. The engineers who receive Band 4 in this scenario are not weak performers in absolute terms; they are the relatively weakest performers in an above-average team.

Recognizing this context does not make the Band 4 less frustrating financially, but it changes the interpretation. A Band 4 in a high-performing team is a different situation from a Band 4 in a team where the rating reflects genuine performance deficiencies.

The Evidence Gap Scenario:

A second common Band 4 scenario: the engineer genuinely performed well, but the manager had inadequate evidence to defend Band 3 or Band 2 in calibration, and when challenged by the senior manager, could not articulate specific, verifiable contributions. The rating dropped not because the performance was poor but because the evidence was thin.

This is the most preventable Band 4 scenario. It is caused almost entirely by the employee's failure to give the manager adequate evidence through the self-assessment and year-round documentation. The performance was present; the evidence was not.

The Project Context Scenario:

In some years, a specific project had challenging circumstances: a difficult client, significant scope changes, technical debt from previous teams, or understaffed delivery. Engineers on these projects may have worked extremely hard and still delivered results below what was possible on less challenging projects. The calibration system does not fully account for project difficulty when comparing across teams.

Recognizing when your Band 4 reflects project context rather than personal performance is important for your own career narrative. The manager who understands this context should communicate it clearly in the appraisal discussion.

What to Do With a Contextual Band 4:

Ask the manager explicitly: “Is this Band 4 a reflection of performance concerns you believe I need to address, or does it reflect the calibration pool dynamics this year?” The answer to this question determines the appropriate response.

If it is a calibration-driven Band 4 with no genuine performance concern: accept it, request a Band 3 target for next year with specific guidance on what evidence would support that, and invest in the evidence-building process described throughout this guide.

If it is a genuine performance Band 4: engage seriously with the development feedback and the PIP process if triggered. This is the more consequential scenario that requires deliberate corrective action.


Preparing Your Manager for Calibration: A Practical Brief

The most effective pre-calibration action is creating what practitioners call a "calibration brief": a short, structured summary you provide to your manager before the calibration session, organizing your evidence in the exact format the manager needs for the calibration discussion.

The One-Page Calibration Brief:

Create a document (physical or digital, shared with the manager) that contains:

  • Name and designation: your name and current band.
  • Target band: the band you believe your performance supports (be realistic).
  • Top three evidence points: the three most compelling, specific, verifiable contributions from the year, in the format Contribution - Evidence - Impact (one sentence each).
  • Learning achievements: certifications completed with dates, Lex hours, skills applied in project work.
  • Client or stakeholder recognition: any specific positive recognition from clients, senior stakeholders, or peers, with dates.
  • Development focus next year: what you plan to build in the next cycle (signals forward commitment).

Example Calibration Brief Content:

Name: Priya Sharma, SSE
Target: Band 2

Top Three Evidence:

  1. Payment module delivery: Led the payment reconciliation module from design to production in 8 weeks with zero severity-1 defects post-release. Client signed off August 30.
  2. Team development: Onboarded two new joiners; both reached team-average delivery velocity within their third sprint.
  3. Process improvement: Automated a manual deployment step that was causing 3/5 deployments to require intervention. Zero manual interventions required in the 8 deployments since implementation.

Learning: AWS SAA certified July 14. 42 Lex learning hours. Databricks Lakehouse certification August 31.

Client recognition: Client PM positive feedback emails August 7 and October 12 (available for manager reference).

Development focus: Cloud architecture ownership of a larger component in the next project.

This brief takes thirty minutes to create and gives the manager everything needed for a strong calibration argument. It is not a demand; it is support material for advocacy.

Provide it to the manager before the calibration window opens (late January) with the framing: “I have put together a summary of my key contributions to help you represent my work in the calibration discussion. I want to make sure you have the specific evidence readily available.”

This approach is professional, practical, and positions you as an engineer who understands how the system works and participates in it collaboratively.


Article 25 (Infosys Salary Hike and Increment Guide) covers what the band rating produces financially: the exact increment percentages by band, the variable pay formula, the promotion increment mechanics, and the five-year salary trajectory scenarios. This article covers the process that produces the rating; Article 25 covers what the rating means financially.

Article 6 (Infosys Career Growth and Promotion Path) covers the promotion eligibility criteria, the promotion timeline by designation, and the career tracks available within Infosys. The appraisal rating history is the primary input to the promotion process described there.

Article 21 (Infosys Fresher First 90 Days on a Project) covers the first appraisal cycle from a fresher perspective: what strong first-year performance looks like in practice, and how the first appraisal discussion typically goes.

Together, Articles 6, 21, 25, and 26 provide the complete career management framework for every stage of an Infosys employee’s journey.

Article 26 of the InsightCrunch Infosys Series. Read all 30 articles at insightcrunch.com.


Final Checklist: Have You Done Everything This Appraisal Cycle?

Use this checklist at the start of each appraisal year to confirm every high-impact action has been taken.

April (Goal Setting):

  • Reviewed last year’s appraisal feedback before setting new goals
  • Set specific, measurable goals for each appraisal dimension
  • Reviewed goals with manager and received explicit agreement
  • Goals entered in iRace before the April deadline
  • Goals include at least one stretch goal targeting Band 2 level achievement

Ongoing (Throughout Year):

  • Weekly contribution log maintained (2-3 sentences per week)
  • Client or stakeholder positive feedback archived with dates
  • Sprint completion records accessible (JIRA reports or equivalent)
  • Certifications completed with completion dates documented
  • Lex course completions tracked
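
The weekly contribution log from the checklist above can be as simple as a dated, append-only text file. The helper below is a minimal sketch of that practice; the file name and entry format are my own assumptions, not an Infosys tool:

```python
from datetime import date
from pathlib import Path

def log_contribution(note: str,
                     log_file: Path = Path("contribution_log.md")) -> None:
    """Append a dated entry (2-3 sentences) to the weekly contribution log."""
    entry = f"- {date.today().isoformat()}: {note}\n"
    with log_file.open("a", encoding="utf-8") as f:
        f.write(entry)

# Example weekly entry (hypothetical project details):
log_contribution(
    "Closed 5 stories in Sprint 4; fixed the payment-retry defect flagged "
    "by the client; paired with the new joiner on API test coverage."
)
```

A plain file like this, reviewed each January, is the raw material for the self-assessment and the calibration brief; the format matters far less than the weekly habit.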

September/October (Mid-Year):

  • Mid-year form completed in iRace
  • Mid-year conversation held with manager
  • Any goal updates made in iRace for changed project circumstances
  • Asked manager: “What would Band 2 look like for the second half?”
  • Any mid-year performance concerns heard and actioned

January (Year-End Assessment):

  • Evidence file reviewed and organized by appraisal dimension
  • Self-assessment written with specific quantified evidence per dimension
  • Self-ratings are honest and differentiated (not all Exceptional)
  • Self-assessment submitted at least one week before deadline
  • Calibration brief prepared and shared with manager
  • Pre-calibration conversation held with manager

March (Appraisal Discussion):

  • Rating communication heard fully before responding
  • Development feedback asked for specifically
  • Next year’s development areas noted
  • Increment letter received and arithmetic verified
  • Any errors raised through Sparsh before April payroll
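
Verifying the increment letter arithmetic, as the checklist item above advises, is a one-line calculation. The figures in this sketch are illustrative only, not actual Infosys increment percentages:

```python
def verify_increment(old_fixed: float, stated_pct: float,
                     new_fixed: float, tolerance: float = 1.0) -> bool:
    """Check the letter's arithmetic: new fixed pay = old * (1 + pct/100)."""
    expected = old_fixed * (1 + stated_pct / 100)
    return abs(expected - new_fixed) <= tolerance

# Illustrative figures only: a stated 9% increment on a 6,00,000 INR
# fixed pay should produce 6,54,000 in the revised letter.
print(verify_increment(600_000, 9.0, 654_000))  # True
```

If the stated percentage and the revised fixed pay do not reconcile, raise the discrepancy through Sparsh before the April payroll processes, as the checklist directs.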

April (New Year Start):

  • Previous year’s development feedback incorporated into new goals
  • Investment declaration submitted reflecting post-increment salary
  • New goal-setting cycle started with lessons from previous year

Completing every item on this checklist in every appraisal cycle is the operational plan for consistently achieving the best available appraisal outcome. The system is not perfectly designed, but it responds to evidence, engagement, and deliberate advocacy. This checklist operationalizes all three.