TCS ILP Ratings - How They Work and Why They Matter for Your Career
Your ILP rating is the first professional performance metric of your career. It influences your project allocation, your base branch flexibility, and the first impression you make on every manager who reviews your profile after training. Yet most freshers have no idea how the rating is calculated, what weight each assessment carries, or how to strategically maximize their cumulative score across 60 days of assessments.
Some freshers treat ILP ratings as a mystery, something that happens to them after the training ends. Others treat it as trivial, assuming that post-ILP performance is all that matters. Both perspectives are wrong. The rating is neither mysterious nor trivial. It is a calculated metric with specific inputs, and understanding those inputs lets you influence the output deliberately.
This guide demystifies the TCS ILP rating system entirely. We cover every component that feeds into the final rating, the approximate weighting of each component based on alumni analysis, the strategic implications for how you allocate your preparation effort, the real-world impact of ratings on project allocation and career progression, and the specific actions that separate top-rated freshers from average ones.
For targeted practice on every assessment that feeds into your rating, use the TCS ILP Preparation Guide tool on ReportMedic.
The Rating Scale
TCS ILP ratings are expressed on a numerical scale, though the exact scale has varied across batches. The most commonly reported scale uses a 1 to 5 range, where 5 represents exceptional performance and 1 represents unsatisfactory performance. Some batches have used a 1 to 10 scale or letter grades that map to numerical equivalents.
Regardless of the specific scale, the relative distribution follows a consistent pattern. A small percentage of freshers (roughly 10 to 15 percent) receive top-tier ratings that mark them as exceptional performers. The majority (roughly 60 to 70 percent) receive mid-tier ratings that indicate solid, satisfactory performance. A smaller group (roughly 15 to 25 percent) receives lower ratings that indicate areas for improvement. And a very small percentage (those in LAP or with significant assessment failures) receive the lowest ratings.
The rating is not a simple pass/fail. It is a differentiated metric that creates a ranking within your batch. This ranking directly influences the opportunities available to you after ILP, which is why understanding and optimizing the rating matters.
What the Numbers Mean in Practice
A rating above 4 (on a 5-point scale) is considered very good to excellent. Freshers with 4+ ratings typically receive priority in project allocation interviews, more choices among available projects, and greater flexibility in base branch requests. They are the first names that Resource Management Group (RMG) managers pull when high-profile projects need fresh talent. In competitive batches, a 4+ rating is the threshold that separates freshers who choose their projects from freshers who take whatever is offered.
Alumni from past batches report specific experiences tied to rating ranges. One alumnus from a Hyderabad Java batch who received a 4.3 rating described receiving three project interview calls within one week of ILP completion. A batchmate with a 4.6 rating received five calls in the same period. Another batchmate with a 3.1 rating waited over a month for a single call.
A rating between 3 and 4 is considered average to good. This is where the majority of freshers fall, and the experience within this range varies. A 3.8 is functionally closer to the 4+ experience than to the 3.2 experience. Freshers at the upper end of this range receive reasonable project opportunities with moderate flexibility. Those at the lower end may experience longer wait times and fewer choices.
A rating below 3 is considered below average. Freshers in this range may experience longer bench periods (four to eight weeks versus one to two weeks for top-rated peers), fewer project choices, less flexibility in base branch allocation, and in some cases, additional skill-building requirements before project deployment. A rating below 3 does not prevent project allocation, but it narrows the options significantly.
Alumni emphasize that the rating difference between 3.5 and 4.0 can feel disproportionately impactful. The 4.0 threshold appears to be an informal signal that many project managers and RMG staff use as a filter when reviewing fresher profiles. Crossing that threshold can make a meaningful difference in the speed and quality of your project allocation.
Rating Distribution from Past Batches
Based on alumni reports across multiple batches and centers, the typical rating distribution looks approximately like this.
Ratings of 4.5 to 5.0: approximately 5 to 8 percent of the batch. These freshers are identified as exceptional and may be considered for high-profile projects, Differential batch opportunities, or early leadership roles. They represent the combination of strong technical scores, strong BizSkills scores, excellent project contributions, and consistently professional behavior.
Ratings of 4.0 to 4.4: approximately 10 to 15 percent. Strong performers who benefit from priority project allocation and base branch flexibility. Their profiles are competitive for most project requirements.
Ratings of 3.5 to 3.9: approximately 30 to 35 percent. The solid majority. Good performance across most components with perhaps one weaker area. Standard project allocation experience.
Ratings of 3.0 to 3.4: approximately 25 to 30 percent. Adequate performance but with noticeable gaps in one or more areas. May experience slower project allocation.
Ratings below 3.0: approximately 10 to 20 percent. Includes freshers who were in LAP, who had significant assessment failures, or who demonstrated inconsistent engagement. The slowest project allocation experience and the most constrained options.
These distributions are approximations based on alumni reporting, not official TCS data. But they provide a realistic framework for understanding where you want your rating to fall and what the competition looks like within your batch.
The Components of Your ILP Rating
Your final rating is a weighted composite of multiple assessment components accumulated across the entire ILP duration. Understanding each component and its approximate weight helps you allocate your preparation effort strategically.
Component 1: IRA Scores (Approximate Weight: 5-10%)
Your IRA1 and IRA2 scores are the first data points in your cumulative rating. While their weight in the final calculation is relatively modest (approximately 5 to 10 percent combined), they establish the trajectory. A strong IRA performance creates positive momentum. A weak IRA performance means you start from behind and need to compensate with stronger subsequent scores.
IRA1 (Aspire-based, 40 questions, 30 minutes, no negative marking) has a pass mark of approximately 55 out of 100. IRA2 (Tech Lounge-based, 30 questions, 75 minutes, with negative marking) is more challenging and tests stream-specific knowledge. Both scores are recorded and fed into the cumulative calculation.
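To see why IRA2 rewards caution, here is a minimal scoring sketch in Python. The 1/3-mark penalty per wrong answer and the assumption that skipped questions simply score zero are illustrative; TCS does not publish the exact IRA2 marking scheme.

```python
# Minimal IRA2-style net-score sketch. ASSUMPTIONS: a 1/3-mark penalty
# per wrong answer and zero marks for skipped questions -- the actual
# marking scheme is not published, so treat both as illustrative.

def ira2_net_score(correct: int, wrong: int, total: int = 30,
                   penalty: float = 1 / 3, max_marks: float = 100.0) -> float:
    """Scale (correct answers minus penalised wrong answers) to max_marks."""
    marks_per_question = max_marks / total
    raw = correct - penalty * wrong
    return max(0.0, raw * marks_per_question)

# Guessing on 8 uncertain questions and getting them all wrong:
print(round(ira2_net_score(correct=22, wrong=8), 1))   # 64.4
# Skipping those 8 questions instead:
print(round(ira2_net_score(correct=22, wrong=0), 1))   # 73.3
```

Under these assumptions, skipping questions you cannot narrow down to a confident answer is worth several marks, which is part of why IRA2 feels harsher than IRA1.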
The strategic implication is clear: do not dismiss IRAs as “just pass/fail.” Every mark above the pass threshold contributes positively to your cumulative score. A fresher who scores 85 on IRA1 and 70 on IRA2 enters the diagnostic phase with a materially better baseline than a fresher who scored 56 and 45 on the same assessments.
Past batch alumni who achieved top ratings consistently report strong IRA scores as part of their profile. One alumnus from the Trivandrum center who received a 4.5 rating shared: “I scored 82 on IRA1 and 68 on IRA2. Those scores did not make my rating by themselves, but they gave me a cushion that reduced pressure on every subsequent assessment.”
Component 2: Technical Diagnostics (Approximate Weight: 25-35%)
Section diagnostics are the regular assessments conducted throughout Phase 1 at the end of each curriculum section. The number of diagnostics varies by stream (typically 5 to 10 across the training period), and each covers the specific technical content of its section.
The pass mark for diagnostics is 65%. Scores above the pass mark contribute positively to your cumulative rating in proportion to how far above 65% they are. Scores at exactly 65% contribute minimally. Scores below 65% trigger the re-do and remedial sequence and, if all attempts are failed, LAP placement.
Diagnostics carry the heaviest assessment weight in the rating calculation (approximately 25 to 35 percent of the total), which makes them the single most impactful component you can influence through preparation.
The key insight from alumni analysis is that consistency matters more than occasional brilliance. A fresher who scores 80, 82, 78, 85, 80, 77, 83, 79 across eight diagnostics (consistently high) will likely receive a higher rating contribution from diagnostics than a fresher who scores 95, 70, 90, 66, 85, 68, 92, 72 (highly variable, with some scores barely above the pass mark).
This is because variable scores suggest inconsistent engagement with the curriculum: the high scores indicate capability, but the low scores indicate gaps. Consistent scores suggest steady, reliable learning, which is the quality TCS values most in project-ready freshers.
Diagnostic questions from past batches follow the patterns documented in detail in our articles on IRA assessments and Aspire preparation. Code output prediction, SQL query construction, OOP concept identification, error identification, and scenario-based application questions form the core of most diagnostics across all programming streams. For ITIS, networking (OSI model, subnetting, DNS/DHCP) and ITIL lifecycle questions dominate. For SAP, transaction code identification and process flow sequencing are the primary patterns.
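As an illustration of the code output prediction pattern, here is a practice item in that style, written in Python; the snippet is invented for this guide, not a reproduced exam question.

```python
# Practice item in the "predict the output" style (invented example).
def mutate(values):
    values.append(sum(values))
    return values

nums = [1, 2, 3]
result = mutate(nums)      # returns the SAME list object, not a copy
result.append(len(nums))   # len(nums) is now 4

print(nums)                # What does this line print?
# Answer: [1, 2, 3, 6, 4] -- `result` and `nums` alias one list,
# so both appends mutate the list that `nums` refers to.
```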
Component 3: BizSkills Assessments (Approximate Weight: 15-20%)
BizSkills assessments evaluate your professional communication skills through speaking and writing tests. The speaking assessment is typically a structured interaction with an assessor: a brief presentation, a conversation on a professional topic, or a role-play scenario. The writing assessment tests your ability to compose professional emails, short reports, or written responses to workplace situations.
The pass mark is 65% for both speaking and writing assessments. BizSkills scores contribute approximately 15 to 20 percent of the total rating. Given that BizSkills failure is the single most common cause of LAP placement (as reported by alumni across multiple batches), this component deserves preparation attention disproportionate to its percentage weight.
The reason is asymmetric risk. A poor technical diagnostic score can be offset by strong scores on other diagnostics. But a BizSkills failure triggers LAP, which extends your ILP and significantly reduces your final rating regardless of how well you performed technically. Protecting yourself from BizSkills failure is a higher priority than maximizing your score on any single technical diagnostic.
Past batch alumni who achieved top ratings universally describe strong BizSkills performance as a key differentiator. The freshers who scored highest on BizSkills were not necessarily native English speakers. They were the ones who practiced spoken English daily, read English content regularly, and treated BizSkills sessions as seriously as technical sessions.
One alumna from the Chennai batch who received a top-tier rating reflected: “I was from a Tamil-medium school and English was not my strong suit. But I decided that I would speak only English during ILP, even when it was uncomfortable. By Week 4, my BizSkills assessor commented that my communication had improved noticeably. That improvement was reflected in my BizSkills score, which pulled my overall rating up.”
Component 4: PRA Score (Approximate Weight: 15-20%)
The Performance Readiness Assessment (PRA) is the comprehensive technical test that covers the entire Phase 1 curriculum. It is typically worth 100 marks and is more challenging than individual diagnostics because it tests integrated understanding across all sections rather than isolated section knowledge.
The PRA carries approximately 15 to 20 percent of the final rating weight, making it the single highest-stakes individual assessment in ILP. A strong PRA score can elevate your overall rating significantly. A weak PRA score can drag down an otherwise solid diagnostic track record.
PRA questions from past batches are reported to include multi-concept scenarios: a question that requires applying both OOP knowledge and JDBC in a single answer, or a scenario that tests SQL and programming logic together. The integrated nature of PRA questions is what makes it harder than diagnostics, which test sections in isolation.
Alumni who scored highest on the PRA consistently describe holistic revision as their preparation strategy. Rather than re-reading each section’s notes independently, they created concept maps that connected topics across sections. “How does the Java Collections framework connect to JDBC ResultSet processing?” “How do servlets use OOP principles?” These cross-cutting questions are exactly what the PRA tests.
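To make the integrated style concrete, here is a sketch of a scenario that crosses two sections in a single task, using Python and SQL as the pairing; the class, schema, and data are invented for illustration.

```python
# Cross-section sketch: OOP (encapsulation) plus SQL (ORDER BY, LIMIT,
# parameterised queries) in one scenario. Invented for illustration.
import sqlite3

class EmployeeRepository:
    """Wraps database access so callers never touch raw connections."""

    def __init__(self, conn: sqlite3.Connection):
        self._conn = conn  # encapsulated: internal detail of the class

    def top_earners(self, limit: int) -> list:
        cursor = self._conn.execute(
            "SELECT name, salary FROM employees ORDER BY salary DESC LIMIT ?",
            (limit,),
        )
        return cursor.fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, salary INTEGER)")
conn.executemany("INSERT INTO employees VALUES (?, ?)",
                 [("Asha", 90000), ("Ravi", 75000), ("Meena", 82000)])
print(EmployeeRepository(conn).top_earners(2))
# [('Asha', 90000), ('Meena', 82000)]
```

A PRA-style question asks you to write, fix, or predict the behavior of code like this, which is answerable only if both the OOP section and the SQL section are solid.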
The TCS ILP Preparation Guide on ReportMedic includes PRA-style integrated practice questions modeled on patterns reported by past batch alumni across major streams.
Component 5: Project Phase Evaluation (Approximate Weight: 20-25%)
The Phase 2 project evaluation is a composite assessment that covers your code quality, your documentation quality, your team collaboration, your individual contribution, and your final presentation performance. It is evaluated by the technical and BizSkills faculty who observe your team throughout the project phase and assess the final presentation.
This component carries approximately 20 to 25 percent of the total rating weight, making it the second most impactful component after technical diagnostics. Unlike diagnostics, which are purely individual assessments, the project evaluation includes both team and individual dimensions.
The team dimension assesses the overall quality of the project: completeness of requirements implementation, code functionality, documentation thoroughness, testing coverage, and presentation clarity. All team members benefit from a strong team result and are affected by a weak team result.
The individual dimension assesses your specific contributions: the code you wrote, the documentation sections you authored, your participation in team meetings and status reports, your behavior during the construction and testing phases (as observed by faculty), and your individual performance during the Q&A portion of the final presentation.
Alumni from past batches report that the project evaluation is the component where subjective faculty assessment has the most influence. Unlike diagnostics (which are objectively scored based on correct answers), the project evaluation involves judgment calls about code quality, documentation thoroughness, and team contribution that are inherently subjective.
This subjectivity makes professional behavior during the project phase critically important. Being visible, being helpful, communicating proactively, volunteering for challenging tasks, and maintaining a positive attitude are all observed by faculty and factored into their individual assessments. The freshers who are quietly competent but invisible may not receive the evaluation credit that their actual contribution deserves.
Component 6: Faculty Observations and Subjective Assessment (Approximate Weight: 5-10%)
Beyond the formal assessment components, faculty observations throughout ILP contribute a subjective assessment that influences the final rating. This component is the hardest to quantify but is consistently reported by alumni as a real factor.
Faculty observe your attendance, your punctuality, your participation in sessions, your behavior in group activities, your attitude toward learning, your helpfulness toward batchmates, and your overall professional demeanor. These observations accumulate into an impression that the faculty express through the subjective component of the evaluation.
The practical implication is that the “soft” behaviors matter: arriving on time, asking thoughtful questions, helping a struggling batchmate, volunteering for group activities, and maintaining a positive, engaged attitude. These behaviors do not have a specific mark value, but they create the faculty impression that can tip a borderline rating upward rather than downward.
The Rating Calculation: How Components Combine
While TCS does not publish the exact formula for ILP rating calculation, alumni analysis across multiple batches suggests a weighted average model where the component weights are approximately as described above.
A simplified illustration: if a fresher scores 80% across diagnostics (weight 30%), 75% on BizSkills (weight 17.5%), 78% on PRA (weight 17.5%), earns a strong project evaluation equivalent to 82% (weight 22.5%), has IRA scores averaging 75% (weight 7.5%), and receives a positive subjective assessment equivalent to 80% (weight 5%), the weighted average would be approximately 79%. On a 5-point scale mapped from 0-100%, this would translate to approximately 4.0, which is in the “good to very good” range.
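That illustration reduces to a few lines of arithmetic, sketched below. The weights are midpoints of the approximate ranges described earlier, and the linear mapping from a 0-100 percentage to the 5-point scale is a simplifying assumption rather than a confirmed formula.

```python
# Reproducing the worked illustration. Weights are midpoints of the
# approximate ranges reported by alumni; the linear 0-100 -> 5-point
# mapping is a simplifying assumption, not a confirmed TCS formula.

components = {          # component: (score %, weight)
    "diagnostics": (80, 0.300),
    "bizskills":   (75, 0.175),
    "pra":         (78, 0.175),
    "project":     (82, 0.225),
    "ira":         (75, 0.075),
    "subjective":  (80, 0.050),
}

weighted = sum(score * weight for score, weight in components.values())
print(f"weighted average: {weighted:.0f}%")      # about 79%
print(f"5-point scale:    {weighted / 20:.1f}")  # about 3.9, i.e. roughly 4.0
```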
The actual calculation may be more complex, with non-linear mappings, floor and ceiling effects, and batch-specific normalization. But the directional insight is valid: high, consistent scores across all components produce a high rating. A single weak component can drag down an otherwise strong profile. And the relative weights suggest that technical diagnostics and the project evaluation are the two areas where preparation effort has the highest return.
Strategic Implications: Where to Focus Your Effort
Given the component weights, the optimal effort allocation strategy follows a clear hierarchy.
Priority 1: Protect Against BizSkills Failure
The asymmetric risk of BizSkills failure (which triggers LAP and significantly reduces your final rating regardless of technical performance) makes BizSkills protection the highest priority. This does not mean spending more time on BizSkills than on technical content. It means ensuring a minimum level of BizSkills preparation that puts you safely above the 65% pass threshold.
For freshers with strong English communication, this may require minimal additional effort. For freshers with weaker English skills, this requires daily practice starting before ILP: speaking English aloud, reading English content, writing practice emails, and engaging in English conversation with batchmates during ILP.
Priority 2: Maximize Diagnostic Scores Through Daily Consistency
Diagnostics carry the highest cumulative weight and are the most directly controllable component. The strategy is simple: complete each iON module on the day it is assigned, take module quizzes seriously, and prepare specifically for each diagnostic using the section content and past batch question patterns.
Consistency is key. Aim for every diagnostic score to be above 80% rather than accepting a mix of high and low scores. The freshers who maintain a consistent 80+ average across all diagnostics are the ones who achieve the highest diagnostic contribution to their final rating.
Priority 3: Prepare Holistically for the PRA
The PRA tests integrated understanding across the full curriculum. Begin PRA preparation in Week 3 of Phase 1 by creating connections between topics as you learn them. Do not wait until the PRA is imminent to start integrating concepts.
Create a revision strategy that reviews the entire curriculum in the final week before the PRA, spending proportionally more time on areas where your diagnostic scores were weakest. The PRA rewards breadth of understanding, so ensuring there are no major knowledge gaps is more important than deepening expertise in topics you already understand well.
Priority 4: Excel During the Project Phase
The project evaluation is the second-highest-weighted component and the one most influenced by behaviors rather than knowledge. Volunteer for meaningful tasks, contribute code and documentation, communicate proactively, help teammates, and prepare thoroughly for the final presentation.
The project phase is also the component where team dynamics affect individual ratings. Being on a strong team helps your rating. But being a visible, contributing member of any team, even one that struggles, protects your individual evaluation.
Priority 5: Score Well on IRAs
While IRAs carry the lowest weight, they set the trajectory for your cumulative score. Strong IRA scores create a positive baseline that reduces pressure on later assessments. Weak IRA scores create a deficit that requires higher subsequent scores to compensate.
The effort required for strong IRA scores is modest: thorough Aspire and Tech Lounge completion. Given the low effort relative to the positive baseline they create, IRA preparation is a high-return investment.
How Ratings Affect Project Allocation
The most tangible, immediate impact of your ILP rating is on project allocation after training. Understanding this connection helps you appreciate why the rating matters beyond ILP itself.
The RMG Process
After ILP, your profile (including your ILP rating, stream, certifications, base branch location, and any other relevant information) is managed by the Resource Management Group (RMG). When a project team needs additional resources, they submit a requirement to RMG specifying the skills needed, the experience level, and the location.
RMG matches available freshers to these requirements based on multiple factors: technical skills (your stream), location (your base branch), availability (you are on the bench), and capability indicators (your ILP rating being the primary one for freshers with no project history).
The Rating Advantage
Higher-rated freshers are presented to project teams first. When multiple freshers match a project requirement, RMG sends the highest-rated candidates for the interview. This means higher-rated freshers see more project opportunities, get interviewed more frequently, and have more choices about which project to accept.
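As a toy model only (not TCS's actual tooling), the behavior alumni describe amounts to a filter on the hard requirements followed by a sort on rating:

```python
# Toy model of the allocation behavior described above: filter on hard
# requirements, then present the highest-rated candidates first.
# Illustrative only -- the real RMG process is internal to TCS.
from dataclasses import dataclass

@dataclass
class Fresher:
    name: str
    stream: str
    base_branch: str
    on_bench: bool
    ilp_rating: float

def shortlist(pool, stream, location, count=3):
    eligible = [f for f in pool
                if f.stream == stream and f.base_branch == location and f.on_bench]
    return sorted(eligible, key=lambda f: f.ilp_rating, reverse=True)[:count]

pool = [Fresher("A", "Java", "Chennai", True, 4.3),
        Fresher("B", "Java", "Chennai", True, 3.1),
        Fresher("C", "Java", "Hyderabad", True, 4.6)]
print([f.name for f in shortlist(pool, "Java", "Chennai")])  # ['A', 'B']
```

In this model, the 4.3-rated fresher is always presented ahead of the 3.1-rated one for the same requirement, which matches the alumni accounts above.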
The practical difference is significant. A top-rated fresher might receive three or four project interview opportunities in their first two weeks on the bench. A lower-rated fresher might wait four to six weeks for a single opportunity. Over the course of a career, this initial velocity difference compounds: earlier project deployment means earlier performance ratings, earlier promotion eligibility, and earlier salary growth.
Alumni from past batches consistently confirm this dynamic. One alumnus who received a 4.5 rating described receiving multiple project opportunities within days of ILP completion. Another who received a 3.0 rating described waiting six weeks for a project interview and having to accept the first opportunity offered regardless of technology or preference.
Base Branch Flexibility
Freshers with higher ILP ratings report greater flexibility in base branch allocation. While base branch is primarily determined by project requirements and organizational needs, top-rated freshers sometimes receive accommodation for location preferences because their capability makes them desirable resources that project teams in the preferred location are willing to accept.
This is not a formal policy. It is a practical reality reported by alumni. When a top-rated fresher requests a base branch change and a suitable project exists in the requested location, the request is more likely to be accommodated than for a lower-rated fresher.
The Long-Term Fade
An important nuance: the ILP rating’s influence fades over time. Once you are on a project and building a performance track record through quarterly reviews and annual appraisals, your on-project performance becomes the dominant career metric. After two to three years, very few people reference your ILP rating.
This means that a poor ILP rating is not a career-ending outcome. It creates a slower start, but consistent strong performance on projects can compensate for and eventually eclipse a weak ILP beginning. Conversely, a strong ILP rating does not guarantee long-term success if on-project performance does not match.
The ILP rating opens the first door. What you do after walking through it determines everything else.
Real Rating Stories: What Happened to Freshers at Different Rating Levels
To make the impact of ILP ratings concrete, here are anonymized real experiences reported by alumni from different rating ranges.
The 4.6 Fresher (Java Stream, Chennai)
This alumnus completed Aspire three weeks before joining, scored 85 on IRA1 and 72 on IRA2, maintained diagnostic scores between 82 and 91 across all sections, scored 88 on the PRA, received strong marks on both BizSkills assessments (speaking and writing), and was the team lead during the project phase with a well-received final presentation.
Post-ILP experience: received four project interview calls within the first week of bench period. Chose a Java development project for a major banking client in the same city as the base branch. Started contributing code within three weeks of project onboarding. Received a “B” band (strong performance) in the first annual appraisal. Earned Oracle Java Certification within eight months of joining. Was considered for the next promotion cycle after just 18 months.
The 3.7 Fresher (Python Stream, Hyderabad)
This alumnus completed Aspire adequately but rushed through Tech Lounge, scored 62 on IRA1 (barely passed) and 48 on IRA2, had diagnostic scores ranging from 68 to 84 (inconsistent), scored 73 on the PRA, and performed well on BizSkills (78 average). The project phase contribution was solid but not exceptional.
Post-ILP experience: received the first project interview call after two weeks on the bench. Was allocated to a data analytics support project that was not the alumnus’s first choice but provided good learning opportunities. Performance on the project was strong, leading to a “B” band in the first appraisal. Within two years, the alumnus had moved to a preferred data science role based on project performance and an AWS Machine Learning certification.
The key takeaway: the 3.7 rating created a slower start (two weeks instead of one week for the first project call) and fewer initial choices, but the long-term career trajectory was positive because on-project performance was strong.
The 2.8 Fresher (ITIS Stream, Trivandrum)
This alumnus arrived at ILP without completing Aspire fully, scored 48 on IRA1 (failed, passed on the re-attempt with 58), scored 35 on IRA2, had inconsistent diagnostic scores (several re-dos), was placed in LAP for BizSkills (writing assessment failure), and the extended training reduced the overall rating significantly.
Post-ILP experience: waited five weeks on the bench before receiving a project interview call. Was allocated to an infrastructure support project with limited technology exposure. The first annual appraisal was “C” band (below expectations) partly due to the slow start and limited project scope. However, the alumnus used the wake-up call constructively: earned CCNA certification during the second year, transitioned to a network engineering role, and received a “B” band in the second appraisal.
The key takeaway: the 2.8 rating created a genuinely difficult first year with a longer bench, fewer choices, and a less desirable project. But the career was recoverable through certifications and strong subsequent performance. The alumnus describes ILP as “the hardest lesson I ever learned” and credits the experience with motivating the disciplined approach that defined the subsequent career.
The Lesson Across All Three Stories
The rating matters most in the first 6 to 12 months. It determines the speed of your deployment, the quality of your first project, and the initial trajectory of your career. After the first year, on-project performance takes over as the primary career driver. But the first year matters because it sets the foundation: a strong first project provides better learning, better mentoring, and better material for your first appraisal. A weak first project provides fewer opportunities to demonstrate capability.
The optimal strategy is to pursue the highest possible ILP rating to maximize your first-year advantage, and then sustain the same discipline on your project to convert that advantage into long-term career momentum.
Frequently Asked Questions About ILP Ratings
Can I see my ILP rating?
Yes. Your ILP rating is communicated to you at the end of ILP, typically on the last day or two. It is also visible in your Ultimatix profile, where it remains as part of your employee record. Project managers reviewing your profile during the allocation process can see this rating.
Is the ILP rating the same as the annual appraisal rating?
No. The ILP rating is specific to your training period and is calculated using the components described in this guide. Annual appraisal ratings (A, B, C bands) are separate performance metrics based on your project contributions, client feedback, and manager evaluations during each annual review cycle. They are related only in the sense that the habits that produce a strong ILP rating tend to produce strong appraisal ratings as well.
Can I appeal my ILP rating if I think it is incorrect?
There is no formal appeal process for ILP ratings in most batches. The rating is calculated from objective assessment scores (which are recorded on the iON platform and verifiable) and subjective faculty evaluations (which are based on observations throughout ILP). If you believe a specific assessment score was recorded incorrectly, you can raise a query through Ultimatix or with your ILP manager. However, subjective evaluation components are not appealable.
Does the ILP rating affect my salary?
Not directly. All freshers in the same cadre (Ninja or Digital) start at the same CTC regardless of ILP rating. The rating affects project allocation, which indirectly affects salary trajectory: better projects provide better learning and better appraisal outcomes, which lead to higher increments and faster promotions. But there is no direct salary adjustment based on ILP rating.
How long does the ILP rating remain visible on my profile?
The ILP rating remains on your Ultimatix employee record indefinitely. However, its practical relevance diminishes rapidly after the first year. Once you have annual appraisal ratings on your profile, those become the reference points that managers and RMG staff focus on. By your second or third year, the ILP rating is historical data rather than a decision-making factor.
Is the rating curved or absolute?
This varies by batch. Some batches appear to use absolute scoring (your rating is purely based on your own scores regardless of how others performed). Others appear to use some form of normalization or relative positioning within the batch. Alumni reports are inconsistent on this point, suggesting that the approach may differ between batches or centers.
The practical advice is to focus on maximizing your own scores rather than worrying about how your peers are performing. Whether the rating is absolute or relative, higher scores on your own assessments produce a higher rating.
What if I was in LAP? How badly does it affect my rating?
LAP placement significantly reduces your final rating. The extended training period is documented on your profile, and the assessment failures that triggered LAP are reflected in the cumulative score. Alumni who were in LAP and subsequently recovered through strong performance report ratings in the 2.5 to 3.2 range, which is below the batch average.
However, LAP is not the end of the story. The alumnus stories earlier in this guide demonstrate that a LAP-affected rating creates a difficult first year but does not prevent a successful career. The determining factor is what you do after LAP: do you let the setback define you, or do you use it as motivation for stronger performance?
Does the type of stream affect the rating scale?
No. The rating scale is consistent across all streams within a batch. A 4.0 in Java is equivalent to a 4.0 in ITIS or SAP. The assessments are stream-specific (different questions), but the scoring methodology and scale are standardized. This ensures fairness in the project allocation process, where freshers from different streams compete for opportunities based on comparable metrics.
What Top-Rated Freshers Do Differently
Across hundreds of alumni accounts, the freshers who achieve the highest ILP ratings share a set of common behaviors that are consistently different from their lower-rated peers.
They Arrive Prepared
Top-rated freshers complete Aspire and Tech Lounge thoroughly before joining. They arrive at ILP with a solid foundation that makes IRAs straightforward and the first weeks of technical training feel like revision rather than new learning. This head start creates positive momentum that carries through the entire training period.
They Are Daily Consistent
Top-rated freshers complete every iON module on the day it is assigned, take every quiz, and prepare specifically for every diagnostic. They never carry a backlog of incomplete modules into the weekend. Their preparation is steady and predictable rather than bursty and reactive.
They Invest in BizSkills from Day 1
Top-rated freshers speak English with batchmates during breaks and meals, even when their mother tongue would be more comfortable. They participate actively in BizSkills sessions. They practice email writing on their own time. They treat BizSkills as equally important to technical preparation, and their BizSkills assessment scores reflect this investment.
They Are Visible Contributors During the Project Phase
Top-rated freshers volunteer for meaningful tasks during the project phase. They write code, they write documentation, they help teammates with debugging, and they prepare thoroughly for the final presentation. Faculty remember them as active, positive contributors, which is reflected in the subjective component of the project evaluation.
They Maintain Professional Behavior Throughout
Top-rated freshers are punctual, well-groomed, respectful, and engaged. They arrive early to sessions, they ask thoughtful questions, they help struggling batchmates, and they follow the code of conduct without exception. These behaviors create the positive faculty impression that influences the subjective assessment component.
They Build Relationships Broadly
Top-rated freshers eat lunch with different people, form study groups with cross-stream batchmates, and build relationships beyond their immediate circle. This broad social engagement is noticed by faculty and contributes to the perception of the fresher as a well-rounded professional rather than a narrow technician.
Common Mistakes That Lower Ratings
The flip side of what top performers do right is what lower performers do wrong. These mistakes are consistently reported by alumni as the behaviors that dragged ratings down. Each mistake is connected to a specific rating component, so you can see exactly where the damage occurs.
Inconsistent Module Completion (Affects: Diagnostics, 25-35% weight)
Falling behind on iON modules and cramming before diagnostics produces variable diagnostic scores. As discussed earlier, variability is worse for the rating than consistent moderate performance because it signals unreliable engagement. A fresher who scores 95, 65, 88, 67, 91, 66 on six diagnostics has nearly the same average as a fresher who scores 79, 78, 80, 77, 81, 78, but the consistent performer likely receives a higher rating contribution because the consistency signals reliability.
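A quick computation on the two sequences above makes the point: the averages differ by a fraction of a mark, while the spread differs by an order of magnitude.

```python
# The two score sequences from the paragraph above: nearly identical
# averages, very different consistency.
from statistics import mean, pstdev

variable   = [95, 65, 88, 67, 91, 66]
consistent = [79, 78, 80, 77, 81, 78]

for label, scores in [("variable", variable), ("consistent", consistent)]:
    print(f"{label:10s} mean={mean(scores):.1f} spread={pstdev(scores):.1f}")
# variable   mean=78.7 spread=12.8
# consistent mean=78.8 spread=1.3
```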
The root cause is always the same: skipping daily module completion and hoping to catch up later. The fix is equally clear: complete each day’s module that day. No exceptions. No “I will do it tomorrow.” This single discipline prevents the most common cause of rating-reducing diagnostic variability.
BizSkills Neglect (Affects: BizSkills, 15-20% weight, plus LAP risk)
Freshers who invest all their effort in technical preparation and neglect BizSkills preparation risk the single worst rating outcome: LAP placement for BizSkills failure. Even without LAP, a poor BizSkills score drags down the overall rating disproportionately because it signals a professional skill gap that TCS takes seriously.
The insidious aspect of BizSkills neglect is that it does not feel like a problem until the assessment. Technical gaps produce low diagnostic scores that create early warning signals. BizSkills gaps produce no warning signals until the speaking or writing assessment, at which point it is too late to develop skills that require weeks of practice.
The prevention is daily English practice starting before ILP. The freshers who arrive at ILP already comfortable speaking English in professional contexts are immune to the BizSkills risk. The freshers who wait until ILP to start practicing are gambling with one of the highest-consequence assessment components.
Passive Project Phase Behavior (Affects: Project Evaluation, 20-25% weight)
Freshers who let stronger teammates handle the project work while they contribute minimally receive weak individual project evaluations even if the team project is strong. The project evaluation includes an individual contribution dimension that faculty assess through direct observation over the three to four weeks of Phase 2.
Faculty can distinguish between team members who actively write code, debug integration issues, and contribute to documentation and those who attend meetings but contribute little in between. The distinction is reflected in individual evaluation scores that feed directly into the overall rating.
The prevention is active contribution from Day 1 of the project phase. Volunteer for a specific module. Write code for it. Test it. Document it. Present it during the final review. Even if your code is not the most elegant on the team, the act of genuine contribution is valued over passive presence.
Negative Professional Behaviors (Affects: Subjective Assessment, 5-10% weight)
Habitual lateness, phone usage during sessions, disengagement during BizSkills activities, skipping optional sessions, and poor interpersonal behavior are all observed by faculty and negatively affect the subjective assessment component. These behaviors cost marks on a component that requires zero technical effort to maximize. It is the most preventable source of rating reduction.
The subjective component may seem small at 5 to 10 percent, but on a 5-point scale, even a half-point reduction can move you from one rating tier to another. A fresher who would have received a 4.0 based on assessment scores alone might receive a 3.7 after negative subjective assessment adjustments. That difference can be the threshold between priority project allocation and standard allocation.
Not Asking for Help Early (Affects: Diagnostics and PRA, 40-55% combined weight)
Freshers who struggle with concepts but do not seek help until after failing a diagnostic miss the opportunity to address gaps proactively. The gap persists, affects the next diagnostic (because concepts build on each other), and compounds into a PRA weakness.
Instructors respect freshers who ask questions and seek clarification. The act of seeking help is itself a positive professional behavior that contributes to the subjective assessment. There is no downside to asking for help and significant downside to struggling silently.
Treating the Project Phase Presentation as an Afterthought (Affects: Project Evaluation, 20-25% weight)
The final project presentation is a high-visibility moment that carries significant evaluation weight. Teams that rehearse their presentation once (or not at all) frequently encounter timing problems, demo failures, and unprepared answers to faculty questions. The resulting poor presentation undermines the project evaluation for the entire team, dragging down individual ratings even for members whose code and documentation contributions were strong.
The fix: rehearse three times minimum, test the demo on actual presentation equipment, and prepare answers for the most likely faculty questions. A few hours of presentation preparation protect a component worth 20 to 25 percent of your rating.
Understanding the Rating in the Context of Your TCS Career
The ILP rating exists within a broader performance management framework that governs your entire TCS career. Understanding where it fits helps you appreciate both its importance and its limitations.
The TCS Performance Hierarchy
Your career at TCS is measured through a series of progressive performance metrics.
During ILP: the ILP rating (the subject of this guide) measures your training performance across assessments, projects, and professional behavior. It is the metric that governs your first project allocation.
During your first year: quarterly project performance reviews and the first annual appraisal (A, B, C bands) measure your on-project contribution. These reviews are conducted by your project manager based on deliverable quality, client feedback, teamwork, and technical growth. The first appraisal determines your first salary increment.
During subsequent years: annual appraisals continue to measure performance, and promotion decisions are based on cumulative appraisal history, certifications, leadership contributions, and organizational need. The promotion cycle moves you from ASE to System Engineer to IT Analyst and beyond.
The ILP rating is the first metric in this hierarchy. It is important because it sets your initial trajectory, but it is one data point in a career-long sequence of evaluations. The habits you build to achieve a strong ILP rating (daily discipline, balanced skill investment, professional behavior, team collaboration) are the same habits that produce strong quarterly reviews and annual appraisals. The rating system changes, but the success formula remains constant.
How ILP Ratings Compare to Industry Norms
TCS is not unique in evaluating its training programs. Infosys, Wipro, HCL, and other large IT services companies have comparable training programs with their own rating systems. What is distinctive about TCS ILP is the comprehensiveness of the evaluation: technical assessments, BizSkills assessments, PRA, project evaluation, and subjective assessment combine into a multi-dimensional metric that is more nuanced than a simple test score.
This comprehensiveness is actually an advantage for well-prepared freshers. A one-dimensional scoring system (like a single final exam) means your rating depends on your performance on one day. A multi-dimensional system means your rating reflects consistent behavior across 60 days, which is a more accurate and more forgiving measure of your true capability.
Your Rating Optimization Checklist
Here is a concise, actionable plan for maximizing your ILP rating across every component. Each item is mapped to the specific rating component it influences.
Before ILP (Influences: IRA scores 5-10%, BizSkills 15-20%)
Complete Aspire modules thoroughly, taking every quiz and reviewing incorrect answers. Target 3,500+ Miles. Complete Tech Lounge modules with hands-on practice alongside the reading material. Practice spoken and written English daily for at least 20 minutes. If English is not your first language, increase this to 40 minutes. Set up a development environment for your stream (IDE for programming streams, virtual machines for ITIS).
Use the TCS ILP Preparation Guide on ReportMedic for structured practice with IRA-aligned questions. Arrive at ILP targeting 80+ on IRA1 and 65+ on IRA2. These scores create a positive baseline that reduces pressure on every subsequent assessment.
Week 1 (Influences: IRA scores 5-10%, subjective assessment 5-10%)
Clear IRA1 and IRA2 with the strongest possible scores. Establish the daily module completion habit from Day 1. Begin speaking English with batchmates during all informal interactions, including meals, breaks, and hostel time. Introduce yourself to faculty by name. Arrive 10 minutes early to every session. Complete Ultimatix setup (bank details, PAN, address) before the 20th of the month to avoid salary delays.
Weeks 2-4 (Influences: Diagnostics 25-35%, BizSkills 15-20%, PRA 15-20%)
Maintain consistent diagnostic scores above 80% through daily module completion and targeted preparation before each diagnostic. Participate actively in every BizSkills session: volunteer for presentations, engage in group discussions, practice email writing during BizSkills homework assignments. Begin connecting concepts across sections in preparation for the PRA: create cross-section concept maps, identify how topics from different sections relate to each other.
Form a study group of three to five batchmates from different backgrounds. Meet daily for 60 to 90 minutes after dinner. Take turns explaining concepts and quizzing each other on diagnostic material.
Weeks 4-5 (Influences: PRA 15-20%, BizSkills 15-20%)
Prepare for the PRA holistically. Review the entire Phase 1 curriculum in a structured revision: one section per evening, with focus on cross-cutting connections. Spend extra revision time on sections where your diagnostic scores were weakest. Practice with PRA-style integrated questions that combine multiple concepts in a single scenario.
Clear BizSkills speaking and writing assessments with comfortable margins above 65%. If your practice indicates you are close to the boundary, invest additional time in BizSkills preparation rather than pushing technical preparation beyond adequacy. The asymmetric risk of BizSkills failure (which triggers LAP and devastates the rating) makes protecting this component the highest-priority use of marginal preparation time.
Weeks 5-8 (Project Phase) (Influences: Project Evaluation 20-25%, Subjective Assessment 5-10%)
Volunteer for meaningful tasks within the project team. If you are ready for it, volunteer for the team lead role. Contribute visibly to code, documentation, and team coordination throughout the phase, not just during the final week. Help teammates who are struggling with their modules. Communicate proactively about progress and blockers during team meetings and status reports.
Prepare the final presentation with at least three full rehearsals. Ensure every team member can present confidently and can answer faculty questions about any part of the project. Test the live demo on actual presentation equipment at least once before presentation day.
Throughout ILP (Influences: Subjective Assessment 5-10%, all other components indirectly)
Arrive on time to every session. Every single one. No exceptions. Participate actively in all activities including optional sessions and IQLASS video conferences. Help batchmates who are struggling with concepts, especially non-CS freshers who need programming guidance. Maintain professional appearance, grooming, and behavior at all times. Build relationships across streams and backgrounds through varied lunch companions and cross-stream study interactions. Follow the campus code of conduct without exception.
How to Recover from a Weak Start
Not every fresher enters ILP with perfect preparation. If you have weak IRA scores, an early diagnostic failure, or a BizSkills scare, the rating is not lost. Recovery is possible through a specific strategy.
After Weak IRA Scores
A weak IRA score creates a small deficit in the cumulative calculation. The recovery strategy is to overperform on the first two or three diagnostics. Each diagnostic carries more weight than an IRA, so consistently strong diagnostic scores quickly compensate for weak IRA results. Freshers who scored below 60 on IRA1 but scored 85+ on the first three diagnostics report that their final rating was not significantly affected by the IRA weakness.
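Reusing the midpoint weights from the earlier rating sketch, the compensation effect is easy to verify; the scores below are invented for the comparison.

```python
# Same midpoint weights as the earlier sketch: 30% diagnostics, 7.5% IRA,
# with the remaining 62.5% lumped at a fixed score. Invented scores,
# illustrative weights only.
def rating(diag, ira, other=75):
    return (0.300 * diag + 0.075 * ira + 0.625 * other) / 20

print(f"{rating(diag=85, ira=55):.2f}")  # about 3.8: weak IRA, strong diagnostics
print(f"{rating(diag=70, ira=80):.2f}")  # about 3.7: strong IRA, weaker diagnostics
```

The diagnostic column dominates: under these weights, even a 25-point IRA deficit is more than offset by a 15-point diagnostic advantage.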
After a Diagnostic Failure and Re-Do
A single diagnostic failure followed by a successful re-do creates a modest negative impact. The recording of the failure and the lower re-do score (even if you pass) pull down the diagnostic average. The recovery strategy is to ensure every subsequent diagnostic score is above 80%, which raises the average and reduces the impact of the single failure.
Do not let a single failure create a psychological spiral. One poor score out of eight or ten diagnostics is recoverable. The danger is not the single failure itself but the demoralization that leads to reduced effort on subsequent assessments.
After BizSkills Concerns
If your BizSkills mid-term assessment or informal feedback suggests you are near the failure threshold, treat this as an emergency. The stakes are the highest of any single assessment because failure triggers LAP, which is the single most damaging event for your ILP rating.
Dedicate 30 additional minutes per day to English speaking practice. Seek specific feedback from the BizSkills faculty by approaching them after a session and asking directly: “What exactly do I need to improve to pass the assessment?” Most faculty are happy to provide targeted guidance because they want you to succeed. Practice the specific skills they identify rather than doing generic English practice.
Ask a strong English-speaking batchmate to be your practice partner. Practice the specific format of the upcoming assessment: if it is a presentation, practice presenting. If it is a professional conversation, practice having structured conversations on workplace topics. If it is a writing assessment, write practice emails and have your partner review them for grammar, tone, and clarity.
Two to three weeks of focused practice can prepare you for the BizSkills assessment far more effectively than most freshers realize. The freshers who fail are almost always those who do not practice at all, not those who practice diligently and fall slightly short. The bar is 65%, not 95%. With targeted daily practice, most freshers can comfortably clear this threshold regardless of their starting English proficiency level.
One alumnus who was flagged as at-risk for BizSkills failure in Week 2 but ultimately scored 72% on the speaking assessment described the recovery: “My LG instructor told me privately that my speaking needed work. I was terrified. For the next three weeks, I spoke English exclusively, even in the hostel, even on phone calls with friends. I practiced presenting in front of the mirror every night. By the time the assessment came, I was nervous but prepared. The 72% was not a brilliant score, but it was safe. And it meant I avoided LAP, which would have dropped my overall rating by at least a full point.”
The Bigger Picture: Ratings as a Career Foundation
Your ILP rating is important, but it is important as a starting point, not as a permanent label. The freshers who achieve the highest ratings gain an initial advantage: faster project deployment, more choices, and a positive first impression. But the advantage is temporary. Within two years, your on-project performance, your certifications, your client relationships, and your leadership contributions determine your career trajectory far more than any ILP metric.
The real value of pursuing a strong ILP rating is not the number itself. It is the habits you build to achieve it. Daily discipline, consistent preparation, balanced investment across technical and communication skills, professional behavior, team collaboration, and strategic effort allocation are the same habits that produce strong project performance, strong appraisal ratings, and strong career progression throughout your time at TCS and beyond.
The freshers who build these habits during ILP carry them into their careers. The ILP rating was just the first measure. The career it enables is the real reward.
Prepare for every component. Pursue consistency over brilliance. Invest in both technical and communication skills. Engage professionally throughout the 60 days. And when the rating arrives, it will reflect the effort you invested, the habits you built, and the professional you are becoming. That reflection is worth pursuing not just for the number but for who it makes you in the process.