YouTube has grown into one of the world's largest and most profitable platforms for digital creators, offering people the chance to turn their creativity into a full-time career. Every day, millions of videos are uploaded across categories like entertainment, technology, education, gaming, and lifestyle. With such massive reach, YouTube has become a key source of income for influencers, vloggers, and businesses. However, how much YouTube pays for videos or views depends on various factors. Many new YouTubers wonder how much the platform actually pays per 1,000 views, as earnings can vary widely. The amount depends on factors such as content type, viewer location, ad engagement, and advertiser demand within the niche.

How YouTube Earnings Work

YouTube pays creators through its YouTube Partner Program (YPP). To join the program, a channel must have at least 1,000 subscribers and 4,000 valid watch hours in the past 12 months. Once approved, creators can start earning money through ads that appear on their videos. Payment is calculated based on CPM (Cost Per Mille), the amount advertisers pay per 1,000 ad impressions. However, creators don't receive the full CPM amount: YouTube keeps about 45% of the ad revenue, while the remaining 55% goes to the creator.

Average YouTube Pay per 1,000 Views

The amount YouTube pays per 1,000 views varies widely depending on several factors such as country, content type, audience demographics, and engagement. On average, creators can earn between $0.50 and $5 per 1,000 views.
Entertainment and Vlogs: $0.50 – $2 per 1,000 views
Tech and Gadgets: $2 – $4 per 1,000 views
Finance and Business: $5 – $10 per 1,000 views
Education and Tutorials: $1 – $4 per 1,000 views

Channels focusing on financial advice, business tips, or digital marketing earn more because advertisers in those categories pay higher rates. In contrast, general entertainment channels usually have lower ad rates due to broad audiences and less targeted ads.

YouTube Earning Calculator

A YouTube Earning Calculator is an online tool that helps estimate how much a creator might earn from their videos. Users simply enter the number of views, estimated CPM, and engagement rate to get an approximate earning figure. For example, if a channel gets 100,000 views with a CPM of $3, the total revenue would be around $300 before YouTube's share. After YouTube takes its 45% cut, the creator would earn approximately $165. While this tool gives a helpful estimate, the actual amount can differ depending on ad availability, viewer location, and the percentage of viewers who watch ads instead of skipping them.

Other Ways Creators Earn on YouTube

Apart from ad revenue, many creators earn money through:

Channel memberships
Super Chat and Super Stickers (during live streams)
Brand sponsorships and collaborations
Affiliate marketing
Merchandise sales

YouTube does not pay a fixed amount per 1,000 views. Earnings depend on the content category, viewer engagement, and location. A YouTube Earning Calculator can help estimate potential income, but real earnings vary from channel to channel. For creators, focusing on quality content and building an engaged audience is what ultimately generates more revenue.
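The arithmetic behind such a calculator is simple enough to sketch in a few lines of Python. This is an illustrative sketch, not any particular site's tool: the function name and the 55% creator share are assumptions matching the revenue split described above.

```python
def estimate_earnings(views, cpm, creator_share=0.55):
    """Estimate creator ad revenue.

    CPM is dollars per 1,000 ad impressions; the creator keeps
    roughly 55% after YouTube's ~45% cut (illustrative defaults).
    """
    gross = (views / 1000) * cpm
    return gross * creator_share

# The article's example: 100,000 views at a $3 CPM
print(estimate_earnings(100_000, 3))  # approximately $165, matching the example above
```

In practice, the view count and the ad-impression count differ (not every view shows an ad), which is one reason real earnings deviate from such estimates.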
iPhone 17e, iPhone 18 And More: Apple Is Likely To Launch THESE Products Next Year | Technology News
Apple 2026 Expected Product Lineup: Apple is preparing for one of its busiest years ever in 2026. According to media reports, the company is expected to launch at least 15 new products across its popular device lineup next year. This includes new iPhones, iPads, Macs, Apple Watches and even smart home gadgets. Apple will reportedly introduce a new iPhone 17e, a more affordable model in the iPhone 17 family. Additionally, Apple is expected to launch the 12th-generation iPad powered by the A18 chip and a new iPad Air running on the M4 chip. Both models are expected to bring faster performance and better battery efficiency. Mac fans also have plenty to look forward to. According to reports, Apple is planning a new MacBook Air with the M5 chip, while the MacBook Pro lineup will feature the more powerful M5 Pro and M5 Max versions. The company may also launch new external displays, continuing to expand its professional-grade screen lineup. Around March or April 2026, Apple is expected to roll out a revamped Siri with AI-powered upgrades. Later in the year, Apple may launch the Apple Watch Series 12 and the iPhone 18 series. The iPhone 18 Pro models are expected to use Apple's new C1 modem, marking a shift away from Qualcomm chips. There are also growing rumours about Apple's first foldable iPhone. The reports suggest that Apple is also planning to refresh several other devices, including smart home security products, a Mac mini with the M5 chip, an updated Mac Studio, and an iPad mini with an OLED display.
GTA 6 Delayed Again — Fans Disappointed As Launch Pushed To November 2026 | Technology News
GTA 6 Release Date: Rockstar Games has officially confirmed that Grand Theft Auto 6 (GTA 6) will not be arriving as early as fans had hoped. The highly anticipated open-world game has been delayed by six months, with its new release date set for November 19, 2026. Originally scheduled to launch on May 26, 2026, GTA 6's delay had already been the subject of online speculation and leaks. Rockstar made the announcement early Friday, confirming what many gamers had feared — another setback in the wait for one of the most anticipated titles in gaming history. In a statement, Rockstar Games apologised to fans for the delay and explained that the extra development time is needed to ensure the game meets the studio's high standards. "We are sorry for adding additional time to what we realize has been a long wait, but these extra months will allow us to finish the game with the level of polish you have come to expect and deserve," the company said. GTA 6, the next major entry in the blockbuster franchise, will be released on PlayStation 5 and Xbox Series X|S consoles. While Rockstar has not confirmed a PC release yet, reports suggest it could arrive several months after the console version, similar to past releases. The game is expected to feature a massive open world inspired by a fictional version of Miami (Vice City) and will reportedly include two main protagonists — a male and a female character, a first for the series.
Despite the disappointment among fans, many users on the internet believe that Rockstar's decision to delay the game could lead to a more polished and immersive experience. After all, the company's previous titles GTA V and Red Dead Redemption 2 were both delayed before release and went on to become some of the most successful games ever made.
7 Advanced Feature Engineering Tricks for Text Data Using LLM Embeddings
Introduction

Large language models (LLMs) are not only good at understanding and generating text; they can also turn raw text into numerical representations called embeddings. These embeddings are useful for incorporating additional information into traditional predictive machine learning models—such as those used in scikit-learn—to improve downstream performance. This article presents seven advanced Python examples of feature engineering tricks that add extra value to text data by leveraging LLM-generated embeddings, thereby enhancing the accuracy and robustness of downstream machine learning models that rely on text, in applications such as sentiment analysis, topic classification, document clustering, and semantic similarity detection.

Common setup for all examples

Unless stated otherwise, the seven example tricks below make use of this common setup. We rely on Sentence Transformers for embeddings and scikit-learn for modeling utilities.

```python
!pip install sentence-transformers scikit-learn -q

from sentence_transformers import SentenceTransformer
import numpy as np

# Load a lightweight LLM embedding model; builds 384-dimensional embeddings
model = SentenceTransformer("all-MiniLM-L6-v2")
```

1. Combining TF-IDF and Embedding Features

The first example shows how to jointly extract—given a source text dataset like fetch_20newsgroups—both TF-IDF and LLM-generated sentence-embedding features. We then combine these feature types to train a logistic regression model that classifies news texts based on the combined features, often boosting accuracy by capturing both lexical and semantic information.
```python
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Loading data
data = fetch_20newsgroups(subset="train", categories=["sci.space", "rec.autos"])
texts, y = data.data[:500], data.target[:500]

# Extracting features of two broad types
tfidf = TfidfVectorizer(max_features=300).fit_transform(texts).toarray()
emb = model.encode(texts, show_progress_bar=False)

# Combining features and training ML model
X = np.hstack([tfidf, StandardScaler().fit_transform(emb)])
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("Accuracy:", clf.score(X, y))  # training accuracy; use a held-out split in practice
```

2. Topic-Aware Embedding Clusters

This trick takes a few sample text sequences, generates embeddings using the preloaded language model, applies K-Means clustering on these embeddings to assign topics, and then combines the embeddings with a one-hot encoding of each example's cluster identifier (its "topic class") to build a new feature representation. It is a useful strategy for creating compact topic meta-features.
```python
from sklearn.cluster import KMeans
from sklearn.preprocessing import OneHotEncoder

texts = ["Tokyo Tower is a popular landmark.",
         "Sushi is a traditional Japanese dish.",
         "Mount Fuji is a famous volcano in Japan.",
         "Cherry blossoms bloom in the spring in Japan."]

emb = model.encode(texts)
topics = KMeans(n_clusters=2, n_init="auto", random_state=42).fit_predict(emb)
topic_ohe = OneHotEncoder(sparse_output=False).fit_transform(topics.reshape(-1, 1))

X = np.hstack([emb, topic_ohe])
print(X.shape)
```

3. Semantic Anchor Similarity Features

This simple strategy computes similarity to a small set of fixed "anchor" (or reference) sentences used as compact semantic descriptors—essentially, semantic landmarks. Each column in the similarity-feature matrix contains the similarity of the text to one anchor. The main value lies in allowing the model to learn relationships between the text's similarity to key concepts and a target variable—useful for text classification models.
```python
from sklearn.metrics.pairwise import cosine_similarity

anchors = ["space mission", "car performance", "politics"]
anchor_emb = model.encode(anchors)

texts = ["The rocket launch was successful.",
         "The car handled well on the track."]
emb = model.encode(texts)

sim_features = cosine_similarity(emb, anchor_emb)
print(sim_features)
```

4. Meta-Feature Stacking via Auxiliary Sentiment Classifier

For text associated with labels such as sentiments, the following feature-engineering technique adds extra value. A meta-feature is built as the prediction probability returned by an auxiliary classifier trained on the embeddings. This meta-feature is stacked with the original embeddings, resulting in an augmented feature set that can improve downstream performance by exposing potentially more discriminative information than raw embeddings alone.
A slight additional setup is needed for this example:

```python
!pip install sentence-transformers scikit-learn -q

from sentence_transformers import SentenceTransformer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
import numpy as np

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim

# Small dataset containing texts and sentiment labels
texts = ["I love this!", "This is terrible.", "Amazing quality.", "Not good at all."]
y = np.array([1, 0, 1, 0])

# Obtain embeddings from the embedder LLM
emb = embedder.encode(texts, show_progress_bar=False)

# Train an auxiliary classifier on embeddings
X_train, X_test, y_train, y_test = train_test_split(
    emb, y, test_size=0.5, random_state=42, stratify=y
)
meta_clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Leverage the auxiliary model's predicted probability as a meta-feature
meta_feature = meta_clf.predict_proba(emb)[:, 1].reshape(-1, 1)  # Prob of positive class

# Augment original embeddings with the meta-feature
# Do not forget to scale again for consistency
scaler = StandardScaler()
emb_scaled = scaler.fit_transform(emb)
X_aug = np.hstack([emb_scaled, meta_feature])  # Stack features together

print("emb shape:", emb.shape)
print("meta_feature shape:", meta_feature.shape)
print("augmented shape:", X_aug.shape)
print("meta clf accuracy on test slice:", meta_clf.score(X_test, y_test))
```
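A downstream model is then trained on the augmented matrix rather than on the raw embeddings alone. The sketch below is a self-contained illustration of that stacking step, with random vectors standing in for real LLM embeddings (the shapes and variable names are assumptions for the demo):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
emb = rng.normal(size=(8, 16))          # stand-in for LLM embeddings (8 texts, 16 dims)
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # sentiment labels

# Auxiliary classifier trained on the embeddings; its positive-class
# probability becomes a single extra meta-feature column
aux = LogisticRegression(max_iter=1000).fit(emb, y)
meta = aux.predict_proba(emb)[:, 1].reshape(-1, 1)

# Final downstream model sees the embeddings plus the meta-feature
X_aug = np.hstack([emb, meta])
final = LogisticRegression(max_iter=1000).fit(X_aug, y)
print(X_aug.shape)  # (8, 17)
```

In practice, generate the meta-feature from a held-out fold (or cross-validated predictions) so the labels the auxiliary classifier saw do not leak into the downstream features.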
100 5G Labs Set Up Across India To Boost 6G Research Ecosystem: Govt | Technology News
New Delhi: India has set up 100 5G labs across the country to develop use cases and enhance the 6G research and development ecosystem, the Department of Telecommunications (DoT) said on Wednesday. The government's collaborative platform Bharat 6G Alliance has also signed 10 international collaborations with global 6G bodies, aiming for a 10 percent share of global 6G patents by 2030, an official statement said. Neeraj Mittal, Secretary (Telecom), made these comments as DoT led the thematic session on 'Digital Communication' at the Emerging Science, Technology and Innovation Conclave here. Mittal emphasised that digital communication is the bedrock of all productive activity and that India's telecom revolution has a direct bearing on national economic growth, adding that India has achieved one of the fastest 5G rollouts globally. The 100 5G labs will position the nation for leadership in 6G technologies, he said. Mittal highlighted that the government's approach to next-generation communication is multi-pronged, supporting research and development, encouraging domestic manufacturing, and building strong bridges between academia, industry, and government. He said that over 100 R&D projects dedicated to 6G are currently being supported, with a focus on advancing Open RAN, indigenous chipsets, AI-based intelligent networks, and regulatory sandboxes to foster innovation. The event featured discussions on private networks and India's telecom goals from industry leaders, and a panel discussion on advancing indigenous technologies. The panel also explored extending the 5G ecosystem in India, advancing indigenous PNT through the NavIC L1 signal, and building disruptive technology stacks from D2M to 6G. 'ESTIC 2025' took place from November 3 to 5, attracting over 3,000 participants from academia, research institutions, industry and government, along with Nobel laureates, eminent scientists, innovators and policymakers.
7 Machine Learning Projects to Land Your Dream Job in 2026
Introduction

Machine learning continues to evolve faster than most can keep up with. New frameworks, datasets, and applications emerge every month, making it hard to know what skills will actually matter to employers. But one thing never changes: projects speak louder than certificates. When hiring managers scan portfolios, they want to see real-world applications that solve meaningful problems, not just notebook exercises. The right projects don't just show that you can code — they prove that you can think like a data scientist and build like an engineer. So if you want to stand out in 2026, these seven projects will help you do exactly that.

1. Predictive Maintenance for IoT Devices

Manufacturers, energy providers, and logistics companies all want to predict equipment failure before it happens. Building a predictive maintenance model teaches you how to handle time-series data, feature engineering, and anomaly detection. You'll work with sensor data, which is messy and often incomplete, so it's a great way to practice real-world data wrangling. A good approach is to use Long Short-Term Memory (LSTM) networks or tree-based models like XGBoost to predict when a machine is likely to fail. Combine that with data visualization to show insights over time. This kind of project signals that you can bridge hardware and AI — an increasingly desirable skill as more devices become connected.

If you want to take it further, create an interactive dashboard that shows predicted failures and maintenance schedules. This demonstrates not just your machine learning skills but also your ability to communicate results effectively.

Dataset to get started: NASA C-MAPSS Turbofan Engine Degradation

2. AI-Powered Resume Screener

Every company wants to save time on recruiting, and AI-based screening tools are already becoming standard.
By building one yourself, you'll explore natural language processing (NLP) techniques like tokenization, named entity recognition, and semantic search. This project combines text classification and information extraction — two critical subfields in modern machine learning. Start by collecting anonymized resumes or job postings from public datasets. Then, train a model to match candidates with roles based on skill keywords, project relevance, and even sentiment cues from descriptions. It's an excellent demonstration of how AI can streamline workflows. Add a bias detection feature if you want to stand out even more.

Dataset to get started: Updated Resume Dataset

3. Personalized Learning Recommender

Education technology (EdTech) is one of the fastest-growing industries, and recommendation systems drive much of that innovation. A personalized learning recommender uses a combination of user profiling, content-based filtering, and collaborative filtering to suggest courses or learning materials tailored to individual preferences. Building this kind of system forces you to work with sparse matrices and similarity metrics, which deepens your understanding of recommendation algorithms. You can use public education datasets like those from Coursera or Khan Academy to start.

To make it portfolio-ready, include user interaction tracking and explainability features — such as why a course was recommended. Recruiters love seeing interpretable AI, especially in human-centered applications like education.

Dataset to get started: KDD Cup 2015

4. Real-Time Traffic Flow Prediction

Urban AI is one of the hottest emerging fields, and traffic prediction sits right at its core. This project challenges you to process live or historical data to forecast congestion levels.
It's ideal for showing off your data streaming and time-series modeling skills. You can experiment with architectures like Graph Neural Networks (GNNs), which model city roads as interconnected nodes. Alternatively, CNN–LSTM hybrids perform well when you need to capture both spatial and temporal patterns. Make sure to highlight your deployment pipeline if you host your model in a cloud environment or stream data from APIs like Google Maps. That level of technical maturity separates beginners from engineers who can deliver end-to-end solutions.

Dataset to get started: METR-LA (traffic sensor time series)

5. Deepfake Detection System

As AI-generated media becomes more sophisticated, deepfake detection has turned into an urgent global concern. Building a classifier that distinguishes between authentic and manipulated images or videos not only strengthens your computer vision skills but also shows that you're aware of AI's ethical dimensions. You can start by using publicly available datasets like FaceForensics++ and experiment with convolutional neural networks (CNNs) or transformer-based models. The biggest challenge will be generalization — training a model that works across unseen data and different manipulation techniques.

This project shines because it combines technical and moral responsibility. A well-documented notebook that discusses false positives and potential misuse makes you stand out as someone who doesn't just build AI but understands its implications.

Dataset to get started: Deepfake Detection Challenge (DFDC)

6. Multimodal Sentiment Analysis

Most sentiment analysis projects focus on text, but modern applications demand more. Think of a model that can analyze speech tone, facial expressions, and text simultaneously. That's where multimodal learning comes in. It's complex, fascinating, and instantly eye-catching on a resume.
You'll likely combine CNNs for visual data, recurrent neural networks (RNNs) or transformers for textual data, and maybe even spectrogram analysis for audio. The integration challenge — making all these modalities talk to each other — is what really showcases your skill. If you want to polish the project for recruiters, create a simple web interface where users can upload a short video and see the detected sentiment in real time. That demonstrates deployment skills, user experience awareness, and creativity all at once.

Dataset to get started: CMU-MOSEI

7. AI Agent for Financial Forecasting

Finance has always been fertile ground for machine learning, and 2026 will be no different. Building an AI agent that learns to predict stock movements or cryptocurrency trends allows you to combine reinforcement learning with traditional forecasting techniques. You can start simple — training an agent using historical data and a reward system based on return rates. Then expand by incorporating real-time
OnePlus 15 Launched With Qualcomm Snapdragon 8 Elite Gen 5 Chipset; Check Display, Camera, Battery, Price And Other Features | Technology News
OnePlus 15 Launch: OnePlus has launched the OnePlus 15 smartphone in China, along with the OnePlus Ace 6. The smartphone succeeds last year's OnePlus 13 and comes with several major upgrades. It features Android's first "Touch Display Sync" technology, which greatly improves touch accuracy and stability for a faster, smoother, and more responsive experience. The company has also confirmed that the OnePlus 15 will be launched in other regions soon. The OnePlus 15 measures between 8.1 mm and 8.2 mm in thickness, depending on the color variant. In China, it is available in three color options: Sand Dune, Absolute Black, and Misty Purple. In India, Amazon has created a dedicated microsite for the smartphone, but the official launch date has not yet been announced.

OnePlus 15 Specifications

The OnePlus 15 features a 6.78-inch AMOLED display with a Full HD Plus resolution of 2772 by 1272 pixels. It offers a peak brightness of up to 1800 nits and a 120 Hz refresh rate, which can reach 165 Hz in certain situations. It is powered by the Qualcomm Snapdragon 8 Elite Gen 5 chipset and comes with two RAM options, 12 GB and 16 GB LPDDR5X, along with storage choices of 256 GB, 512 GB, or 1 TB using UFS 4.1 technology. It has a 7300 mAh battery that supports 120 W Super Flash Charge and 50 W wireless charging. For photography, the device includes a triple rear camera setup with a 50 MP wide lens, a 50 MP ultra-wide lens, and a 50 MP telephoto lens. On the front, it has a 32 MP camera for selfies. The phone introduces a new Glacier Cooling System that uses an ultra-thin, hand-tearable steel material, expanding the vapor cooling area by 43 percent and improving water absorption by 100 percent. It also includes various sensors such as proximity, ambient light, color temperature, electronic compass, accelerometer, gyroscope, hall, laser focus, spectrum, and an IR blaster.
For security, the device features an in-display ultrasonic fingerprint scanner and supports 5G, Wi-Fi 7, NFC, Beidou, GPS, GLONASS, Galileo, and QZSS connectivity.

OnePlus 15 Price

The OnePlus 15 starts at CNY 3,999 (around Rs 50,000) for the base model with 12 GB RAM and 256 GB storage. Other variants are priced at CNY 4,299 (around Rs 53,000) for 16 GB RAM and 256 GB storage, CNY 4,599 (around Rs 57,000) for 12 GB RAM and 512 GB storage, and CNY 4,899 (around Rs 61,000) for 16 GB RAM and 512 GB storage. The top model with 16 GB RAM and 1 TB storage is priced at CNY 5,399, approximately Rs 67,000. Sales will begin in China on October 28 through the company's online store.
OpenAI Offers Free Access To ChatGPT Go For All Users In India For 1 Year From THIS Date | Technology News
OpenAI ChatGPT Go Access In India: OpenAI announced that it will offer one year of free access to ChatGPT Go for all users in India who sign up during a special promotional period starting November 4. The offer celebrates OpenAI's first DevDay Exchange event in Bengaluru, which will also take place on the same day. ChatGPT Go is OpenAI's new subscription plan that provides access to advanced features such as higher message limits, more image generation, longer memory, and the ability to upload extra files and images. All these features are powered by the latest GPT-5 model.

OpenAI ChatGPT Go Access Introduced In India

The plan was first introduced in India in August after users requested a more affordable way to use ChatGPT's advanced tools. Within a month, the number of paid ChatGPT users in India more than doubled, prompting OpenAI to expand ChatGPT Go to about 90 countries worldwide. India is now ChatGPT's second-largest and one of the fastest-growing markets, with millions of students, professionals, and developers using the tool daily to learn new skills, enhance creativity, and build innovative projects. The new offer reflects OpenAI's continued "India-first" approach and supports the government's IndiaAI Mission, which aims to expand access to artificial intelligence tools and encourage innovation across the country.

OpenAI Working With Civil Society Groups

OpenAI is also working with civil society groups, educational platforms, and government-led initiatives to make AI tools more accessible and inclusive. Existing ChatGPT Go subscribers in India will also be eligible for the free 12-month offer, with more details to be announced soon. Nick Turley, Vice President and Head of ChatGPT, said the company has been inspired by how Indian users are using ChatGPT Go.
“Ahead of our first DevDay Exchange event in India, we’re making ChatGPT Go freely available for a year to help more people across India easily access and benefit from advanced AI. We’re excited to see the amazing things our users will build, learn, and achieve with these tools,” he said. (With IANS Inputs)
Apple iPhone 17e Likely To Make India Debut With Display Upgrade And A19 Chip; Check Leaked Specifications, Price And Other Features | Technology News
Apple iPhone 17e India Launch: Apple is expected to launch the iPhone 17e in the first quarter of 2026. The upcoming model will succeed the iPhone 16e, which is known for its strong performance, Apple Intelligence features, and affordable price. With the iPhone 17e, Apple is rumored to introduce a major design and display upgrade that could attract many users to switch. As per reports, the iPhone 17e may feature a Dynamic Island instead of the traditional display notch. This change could give the phone a modern look and improve the overall user experience compared to older models.

Apple iPhone 17e: What Is Dynamic Island

It is a pill-shaped interactive area at the top of the screen that displays ongoing activities like calls, music, navigation, and alerts. It also houses the front camera and Face ID sensors. Apple first introduced this feature with the iPhone 14 Pro and iPhone 14 Pro Max in 2022, later bringing it to the iPhone 15, 16, and now the 17 series. If the reports are true, the iPhone 17e will be the first affordable model to include this premium design feature.

iPhone 17e Specifications (Leaked)

According to leaks from Digital Chat Station, Apple is reportedly planning to bring the Dynamic Island feature to the upcoming iPhone 17e. The device is expected to be powered by the A19 chip, though it may include slightly modified cores to align with its pricing strategy. Despite the addition of Dynamic Island, the iPhone 17e will likely retain a 6.1-inch OLED display with a 60Hz refresh rate, unlike the flagship iPhone 17 models, which feature a smoother 120Hz ProMotion display.

Apple iPhone 17e India Launch And Price (Expected)

The upcoming iPhone 17e is expected to be priced similarly to the iPhone 16e.
Early leaks suggest that the iPhone 17e could launch in India with a starting price of around Rs 59,900. It is expected to launch in February 2026, around a year after the iPhone 16e.
Two AIs – Artificial Intelligence And Aspirational Indian Powering India Today: Bansuri Swaraj At TiEcon Delhi 2025 | Technology News
With the Narendra Modi government focusing on entrepreneurship, the country already has an ecosystem in place that fosters innovation. Lok Sabha MP Bansuri Swaraj on Thursday said that India today is powered by two AIs, and when the two meet, they accelerate the progress of the country. Speaking during TiEcon Delhi 2025, the BJP MP affirmed her faith in women-led development, saying that under Digital India, technology has become a tool for public good. "India today is powered by two AIs - Artificial Intelligence and the Aspirational Indian. When the two meet, they accelerate progress. As we enter the decade of deeptech, women must be at the forefront because if we leave out half of our population, we are not building artificial intelligence, we are risking artificial ignorance. Women who were once silent engines of progress are now becoming focal visionaries in technology, and that shift is transforming India's story. Under the Digital India vision of Prime Minister Narendra Modi, technology has become a tool for public good, empowering talent across the nation and ensuring equitable access for women," said Swaraj, after unveiling the 'Wired for Impact: Women in AI' report by Kalaari. The report recognizes and applauds the achievements of women leaders shaping India's AI landscape. With over 2,000 delegates, TiEcon Delhi 2025 affirmed its position as one of the country's leading deeptech summits while shining a powerful spotlight on women-led innovation, AI inclusion, and financial leadership. The Wired for Impact report reveals that while women currently make up only one in five professionals in India's technology workforce, this number is projected to grow nearly fourfold by 2027, with over 3.3 lakh women expected to hold AI roles. The report also found that AI/ML has emerged as the most preferred career track for women in technology, with 41% choosing it over other domains, a figure that even surpasses their male counterparts at 37%.
TiEcon Delhi 2025 brought together policymakers, investors, and founders on one platform, creating a powerful collective voice in support of India's entrepreneurial growth. "We are gratified about the participation from corporates and in particular, key decision makers across the government department. Our startup pitching sessions highlighted breakthrough ideas and the investor community's enthusiasm reaffirmed the immense potential that lies ahead for India's innovation economy," said Geetika Dayal, Director General, TiE Delhi-NCR. Speaking at the conference Vani Kola, MD, Kalaari Capital said, "Innovation reaches its full potential only when it reflects the diversity of those it serves. In India, women continue to be underrepresented in technology, especially in roles that require advanced technical skills or leadership. With AI specifically, underrepresentation doesn't just limit participation; it limits perspective and, ultimately, impact. When the systems we build learn and reason from a narrow or biased worldview, they risk encoding those same limitations into the intelligence that shapes our future." Experts noted that if India is to build better and more trustworthy AI for the world, diversity must be treated as a mission-critical KPI.