Aug 2022, Volume: 3, Article: 5

Role of Artificial Intelligence towards Future Farming and Transforming the Agriculture

Authors: Sunil Kumar, Lalit Kumar, Jairam Choudhary, P.C. Ghasal, Ekta Narwal, A.L. Meena

ABSTRACT

Presently, Artificial Intelligence (AI) technology is playing a big role in our daily life. The technique is capable of completing dedicated tasks with intelligence by using a machine that can mimic human behaviour. To use this technique in agriculture, there is a need to develop an AI model, such as an application, system, robot or machine, that can leverage the power of this advanced technology to boost agricultural productivity. The developed machines can be utilized effectively to analyze the soil before sowing the seeds, monitor crop health vis-à-vis maturity level, monitor and control pesticide doses and durations, forecast weather conditions and predict other natural calamities to minimize the losses from such threats. AI-powered solutions will not only enable farmers to improve efficiency but will also improve quantity and quality and ensure a faster go-to-market for crops. From the 1940s to date, a lot of effort has been made by various workers and organizations to develop and evolve this technology. The emergence of intelligent services such as Facebook, Twitter and Netflix during the period 2000-2010 provided a breakthrough for the technique, and it is now being utilized in all aspects of life.

Cite this article:

 

Kumar, S., Kumar, L., Choudhary, J., Ghasal, P.C., Narwal, E. and Meena, A.L. (2022). Role of Artificial Intelligence towards Future Farming and Transforming the Agriculture. Food and Scientific Reports, 3(8):19-24.

1.    Introduction

The use of Artificial Intelligence (AI) technology offers us multiple options to integrate this technique into the farming system. AI can be defined as "the ability of machines or computer-controlled robots to perform tasks that are associated with intelligence." Thus, AI is a branch of computer science that aims to develop intelligent machines that can mimic human behaviour. Based on capabilities, AI can be divided into three types: (1) Narrow AI, which is capable of completing dedicated tasks with intelligence; (2) General AI, also known as Artificial General Intelligence (AGI), a machine that can show human-level intelligence; and (3) Super AI, which refers to self-aware AI with cognitive abilities that surpass those of humans, a level where machines can do any task that a human can do, with cognitive properties. The current stage of AI is Narrow AI.

To apply this technique in agriculture, there is a need to develop an AI model, such as an application, system, robot or machine, that can leverage the power of this advanced technology to boost agricultural productivity. Some of the general challenges that exist in the agricultural domain are:

  1. In farming, climatic factors such as rainfall, temperature and humidity play a big role. Increasing deforestation and pollution result in climatic changes, making it difficult for farmers to decide when to prepare the soil, sow seeds and harvest.
  2. Every crop requires specific nutrition in the soil. The three main nutrients required in soil are nitrogen (N), phosphorus (P) and potassium (K). A deficiency of these nutrients can lead to poor crop quality.
  3. Protecting crops from weed infestation is important but a costly affair. If not controlled in time, weeds increase production cost and absorb nutrients from the soil, which can cause nutrient deficiency in the soil.

In agriculture, the technology is presently playing a big role in analyzing the soil before sowing the seeds, monitoring crop health vis-à-vis maturity level, monitoring and controlling pesticide doses and durations, forecasting weather conditions and predicting other natural calamities to minimize the losses from such threats. AI systems are helping to improve overall harvest quality and accuracy, and also help in detecting plant diseases, pests and poor nutrition on farms.

Though intelligent robots and artificial intelligence first appeared in the ancient Greek myths of Antiquity, the history of artificial intelligence as we think of it today spans less than a century. Aristotle's development of the syllogism and its use in deductive reasoning is considered a key moment in mankind's quest to understand its own intelligence. The following is a quick look at some of the most important events in AI (Stuart Russell and Peter Norvig, 2003).

Maturation of Artificial Intelligence (1940-1950)

"A Logical Calculus of the Ideas Immanent in Nervous Activity" was first published by Warren McCulloch and Walter Pitts in 1943. The paper proposed the first mathematical model for building a neural network. Thereafter, Donald Hebb (1949) demonstrated an updating rule for modifying the connection strength between neurons; the rule is now called Hebbian learning. In 1950, Alan Turing published a paper entitled "Computing Machinery and Intelligence" that described a method for determining the intelligence of a machine, now known as the Turing Test. Marvin Minsky and Dean Edmonds, then at Harvard, built SNARC, the first neural network computer, in 1950. Claude Shannon in 1950 published a paper on programming a computer to play chess, whereas Isaac Asimov (1950) published the Three Laws of Robotics.

The birth of Artificial Intelligence (1950-1960)

To play checkers, Arthur Samuel developed a self-learning program in 1952. The Georgetown-IBM experiment (1954) was a machine-translation demonstration that automatically translated 60 carefully selected Russian sentences into English. Allen Newell and Herbert A. Simon (1955) created the "first artificial intelligence program", named "Logic Theorist". This program proved 38 of 52 mathematics theorems and found new and more elegant proofs for some of them. In 1956, the term "Artificial Intelligence" was coined by American computer scientist John McCarthy at the Dartmouth Conference; this was when the term entered the academic field. McCarthy in 1958 developed the AI programming language Lisp and published a paper entitled "Programs with Common Sense". The paper proposed the hypothetical Advice Taker, a complete AI system with the ability to learn from experience as effectively as humans do. In 1959, the GPS (General Problem Solver) program was designed to imitate human problem-solving by Allen Newell, Herbert Simon and J.C. Shaw. Herbert Gelernter in 1959 developed the Geometry Theorem Prover (GTP) program, whereas the term machine learning was coined by Arthur Samuel while at IBM in 1959. In the same year, John McCarthy and Marvin Minsky founded the MIT Artificial Intelligence Project.

The golden years-Early enthusiasm (1960-1970)

In 1963, John McCarthy started the AI Lab at Stanford. Joseph Weizenbaum (1966) emphasized developing algorithms to solve mathematical problems, and in the same year he created the first chatbot, named ELIZA. The first successful expert system, DENDRAL, was developed at Stanford in 1969; it was a program for identifying organic compounds, and the later MYCIN system was designed to diagnose blood infections.

The first AI winter (1970-1980)

PROLOG, a logic programming language, was created in 1972. The "Lighthill Report", which dealt with the disappointments in AI research, was released by the British government and led to severe cuts in funding for artificial intelligence projects (Lighthill, 1973). Between 1974 and 1980, major DARPA cutbacks in academic grants occurred, leading to growing frustration with the progress of AI development. Due to the combined effect of the earlier ALPAC report and the "Lighthill Report", artificial intelligence funding stopped almost completely and research in this field was hampered. This period is known as the "First AI Winter". During AI winters, public interest in and publicity of artificial intelligence decreased (Colby, Kenneth M., 1974).

 A boom in AI (1980-1990)

The first successful commercial expert system, Digital Equipment Corporation's R1 (also known as XCON), was deployed in 1980. It was designed to configure orders for new computer systems. R1 kicked off an investment boom in expert systems that lasted for a decade and effectively ended the first "AI Winter." John Hopfield and David Rumelhart popularized "deep learning" techniques that allow computers to learn from experience. In 1982, Japan's Ministry of International Trade and Industry launched the ambitious Fifth Generation Computer Systems (FGCS) project. The goal of FGCS was to develop supercomputer-like performance and a platform for AI development. The U.S. government in 1983 launched the Strategic Computing Initiative in response to Japan's FGCS to provide DARPA-funded research in advanced computing and artificial intelligence. By 1985, companies were spending more than a billion dollars a year on expert systems and the AI industry, and a market known as the Lisp machine market developed to support them. Companies such as Symbolics and Lisp Machines Inc. built specialized computers to run the AI programming language Lisp (Daniel Bobrow et al., 1986).

Bust: the second AI winter (1987-1993)

Between 1987 and 1993, computing technology improved considerably and cheaper alternatives emerged; as a result, the Lisp machine market collapsed. Investors and the government again stopped funding AI research because of the high costs and the lack of efficient results.

The emergence of intelligent agents (1990-2000)

DART: In 1991, U.S. forces deployed DART, an automated logistics planning and scheduling tool, during the Gulf War. In 1992, Japan terminated the FGCS project, citing its failure to meet the ambitious goals outlined a decade earlier. In 1993, DARPA ended the Strategic Computing Initiative after spending nearly $1 billion and falling far short of expectations, whereas IBM's Deep Blue (1997) beat the world chess champion Garry Kasparov, thereby becoming the first computer to beat a world chess champion (Nils J. Nilsson, 1998).

Emergence of intelligent Facebook, Twitter, and Netflix (2000-2010)

In 2002, AI entered homes for the first time in the form of Roomba, a robotic vacuum cleaner. A self-driving car known as STANLEY, which won the DARPA Grand Challenge, was developed in 2005. In the same year, the U.S. military began investing in autonomous robots like Boston Dynamics' "Big Dog" and iRobot's "PackBot". By 2006, companies like Facebook, Twitter and Netflix had also started using AI, and this was the time when utilization of the technique began in the business world. In 2008, Google made a breakthrough in speech recognition and introduced the feature in its iPhone app.

Deep learning, big data and artificial general intelligence (2010-2021)

In 2011, IBM's Watson won Jeopardy!, a quiz show in which it had to answer complex questions as well as riddles; Watson proved that it could understand natural language and solve tricky questions quickly. In the same year, Apple released Siri, an AI-powered virtual assistant, as part of the iOS operating system. In 2012, Andrew Ng, the founder of the Google Brain Deep Learning project, fed a neural network 10 million YouTube videos as a training set using deep learning algorithms. The network learned to recognize a cat without being told what a cat is, providing a breakthrough for neural networks and for deep learning funding. The first self-driving car to pass a state driving test was developed by Google in 2014. In the same year, Amazon released Alexa, a virtual home assistant. Google DeepMind's AlphaGo, which defeated the world champion Go player Lee Sedol, was developed in 2016. In 2016, Hanson Robotics created Sophia, a humanoid robot capable of facial recognition, verbal communication and facial expressions, which went on to become the first "robot citizen". In 2018, Google demonstrated an AI program called Duplex, a virtual assistant that booked a hairdresser appointment over the phone while the lady on the other side did not notice that she was talking to a machine. Waymo in 2018 launched its Waymo One service, allowing users throughout the Phoenix metropolitan area to request a pick-up from one of the company's self-driving vehicles. Baidu in 2020 released its LinearFold AI algorithm to scientific and medical teams engaged in developing a vaccine during the early stages of the SARS-CoV-2 pandemic. The algorithm was able to predict the secondary structure of the virus's RNA sequence in just 27 seconds, 120 times faster than other methods. In 2021, forward-looking organizations began pushing AI to new frontiers, for example holographic meetings for telecommunication and on-demand, personalized manufacturing; they are gamifying strategic planning, incorporating simulations in the boardroom and moving into intelligent edge experiences.

2.    Application of AI in Agriculture:

Technology has not only redefined farming over the years; technological advances have affected the agriculture sector in more ways than one. AI-powered solutions will not only enable farmers to improve efficiency but will also improve quantity and quality and ensure a faster go-to-market for crops. Agriculture is the main occupation in many countries worldwide, and with the world population projected by the UN to increase from 7.5 billion to 9.7 billion by 2050, there will be more pressure on land and farmers will have to grow more with less area. As per this population projection, food production will have to increase by 60% to feed an additional two billion people. Traditional methods, however, are not enough to handle this huge demand. This is driving farmers and agro companies to find newer ways to increase production and reduce waste. As a result, Artificial Intelligence (AI) is steadily emerging as part of the agriculture sector's technological evolution. In agriculture, AI can be used to identify defects and nutrient deficiencies in the soil. This can effectively be done using computer vision, robotics and machine learning applications. AI can analyze where weeds are growing, and AI bots can help to harvest crops at a higher volume and faster pace than human labourers. Some of the important applications of AI in agriculture are given below.
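As a rough, hypothetical illustration of the kind of machine learning application mentioned above, the sketch below trains a simple classifier to flag nutrient-deficient soil samples from N-P-K test readings. The synthetic data, the deficiency thresholds and the choice of a random forest model are illustrative assumptions only, not methods described in this article.

```python
# Hypothetical sketch: screening soil samples for nutrient deficiency with a
# simple machine learning classifier. The synthetic N-P-K data, the deficiency
# thresholds and the model choice are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

# Simulated soil test results: nitrogen, phosphorus, potassium readings.
X = rng.uniform(low=[50, 5, 50], high=[300, 60, 400], size=(500, 3))

# Assumed agronomic thresholds: a sample is "deficient" (label 1) if any
# nutrient falls below its threshold.
thresholds = np.array([120.0, 15.0, 120.0])
y = (X < thresholds).any(axis=1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print("Held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))

# Screen a new (hypothetical) field sample before sowing.
new_sample = [[100.0, 20.0, 150.0]]
print("Deficient?", bool(model.predict(new_sample)[0]))
```

In a real deployment, the training data would come from laboratory soil tests labelled by agronomists, and similar models based on computer vision could be used to spot weeds or disease symptoms in field images.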

3.    AI Model for Agriculture

AI-based models such as autonomous tractors, robots, drones and weed-controlling machines, or other similar devices, can be designed and developed for successful use in various agricultural operations. All such devices will help in improving agricultural productivity on a large scale (Dutta et al., 2020). AI in agriculture can be implemented in two ways: first, by using already developed AI-based models such as robots or drones to analyze crop health; and second, by developing such machines with the help of machine learning engineers and data scientists. The developed machines can be implemented successfully in the following agricultural operations:

i. Preparation of soil: It is the initial stage of farming, in which farmers prepare the soil for sowing seeds. This process involves breaking large soil clumps and removing debris such as sticks, rocks and roots. The application of fertilizers and organic matter to the soil also depends on the type of crop, to create an ideal situation for the crop. These operations can be completed more efficiently by purpose-built AI machines.

ii. Sowing of seeds: This stage requires taking care of the distance between two seeds and the depth at which seeds are planted. At this stage, climatic conditions such as temperature, humidity and rainfall play an important role. Some machines have been developed for accurate line sowing and appropriate placement of seeds.

iii. Adding fertilizers: Maintaining soil fertility is important so that farmers can continue to grow nutritious and healthy crops. Farmers turn to fertilizers because these substances contain plant nutrients such as nitrogen, phosphorus and potassium. Fertilizers are simply plant nutrients applied to agricultural fields to supplement the elements found naturally in the soil. This stage also determines the quality of the crop.

iv. Irrigation: This stage helps to keep the soil moist and maintain humidity. Under-watering or over-watering hampers crop growth and, if not done properly, can lead to damaged crops (a minimal sketch of an automated irrigation decision follows this list).

v. Weed protection: Weeds are unwanted plants that grow near crops or at the boundary of farms. Weed protection is an important factor because weeds decrease yields, increase production cost, interfere with harvest and lower crop quality.

vi. Harvesting: It is the process of gathering ripe crops from the fields. It requires many labourers and is therefore a labour-intensive activity. This stage also includes post-harvest handling such as cleaning, sorting, packing and cooling.

vii. Storage: This is the phase of the post-harvest system during which products are kept in such a way as to guarantee food security beyond the crop-growing periods. It also includes packing and transportation of the crop yield.
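Relating to stage (iv), irrigation, the following is a minimal, rule-based sketch of how an automated system might decide whether, and for how long, to irrigate. The FieldReading fields, the moisture thresholds and the two-minutes-per-percent formula are hypothetical and only illustrate the decision logic; they are not values from this article.

```python
# Hypothetical sketch of a rule-based irrigation decision for stage (iv) above.
# The sensor fields, moisture thresholds and run-time formula are assumptions
# for illustration; a real system would calibrate them per crop and soil type.
from dataclasses import dataclass


@dataclass
class FieldReading:
    soil_moisture_pct: float  # volumetric soil moisture from an in-field sensor
    rain_probability: float   # forecast probability of rain in the next 24 h (0-1)


def irrigation_minutes(reading: FieldReading,
                       target_moisture_pct: float = 35.0,
                       critical_moisture_pct: float = 20.0,
                       rain_skip_threshold: float = 0.6) -> int:
    """Return how many minutes to run irrigation for one cycle."""
    # Skip irrigation when heavy rain is likely, to avoid over-watering.
    if reading.rain_probability >= rain_skip_threshold:
        return 0
    # Soil is already moist enough: do nothing.
    if reading.soil_moisture_pct >= target_moisture_pct:
        return 0
    # Irrigate in proportion to the moisture deficit (assumed 2 min per %),
    # with an extra run when the soil is critically dry.
    deficit = target_moisture_pct - reading.soil_moisture_pct
    minutes = int(round(deficit * 2))
    if reading.soil_moisture_pct < critical_moisture_pct:
        minutes += 15
    return minutes


# Example: dry soil and little chance of rain, so the controller irrigates.
print(irrigation_minutes(FieldReading(soil_moisture_pct=18.0, rain_probability=0.1)))
```

A data-driven version could replace these fixed thresholds with a model trained on crop type, weather forecasts and historical yields, which is where machine learning enters this stage.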

4.    The future scope of Artificial Intelligence in India

The immense potential of AI can be understood from the accumulation of various other technologies within it, viz. self-improving algorithms, machine learning, pattern recognition, big data and many more. Though the adoption of Artificial Intelligence in India is promising, it is still in its early stages. While some industries, such as IT, manufacturing and automobiles, are taking advantage of AI, there are still many areas in which its potential has not been explored. In the coming years, it is predicted that hardly any industry will be left untouched by this powerful tool. The future of Artificial Intelligence is bright in India as many organizations are opting for AI automation. The scope of Artificial Intelligence is not limited to domestic and commercial purposes; the medical and aviation sectors are also using AI techniques to improve their services and reduce costs. Recently, automation of operational vehicles has created a buzz in the logistics industry, as it is expected that automated trucks and vehicles may soon be developed and used. In future, owing to the bright scope of Artificial Intelligence in various fields, the number of start-ups related to AI is also expected to increase. Hence, in the coming years, AI will continue to act as a technological innovator, turning fantasy into reality. Machines that help humans with intelligence are not just in sci-fi movies but also in the real world. Nowadays, we are living in a world of Artificial Intelligence that was just a story only a few years back.

5.    Conclusion

Undoubtedly, Artificial Intelligence (AI) is a revolutionary field of computer science and is ready to become the main component of various emerging technologies like big data, robotics and IoT. Knowingly or unknowingly, we are using AI technology in our daily lives, and in some cases it has become an essential part of our life. Ranging from Alexa and Siri to chatbots, everyone carries AI in their daily routine. The development and evolution of this technology are happening at a rapid pace, but the journey has not been as smooth and easy as it may seem. The technique has taken several years and a lot of hard work and contributions from various people to develop to this stage. Apart from its several beneficial aspects, this revolutionary technology is also surrounded by many controversies about its future and impact on human beings. Recently, businesses have seen massive exposure to AI and ML as they explore application possibilities in various fields. Businesses are already on the path to creating more robust virtual work environments, which has increased the demand for AI and ML professionals. The technique can also be adopted for various agricultural operations such as analyzing the soil before sowing the seeds, monitoring crop health vis-à-vis maturity level, controlling pests by monitoring pesticide doses and durations, forecasting weather conditions and predicting other natural calamities to minimize the losses from such threats. In the medical segment, the technique is being utilized to track the spread of viruses, for contact tracing, and even for treatment analytics.

 

References

"`The Game is Over`: Google`s DeepMind says it is on verge of achieving human-level AI". The Independent. 23 May 2022.

Colby, Kenneth M. (1974). Ten Criticisms of Parry. Stanford Artificial Intelligence Laboratory, Report No. STAN-CS-74-457. Retrieved 17 June 2018.

Bobrow, D. et al. (1986). Expert Systems: Perils and Promise. Communications of the ACM, 29(9), 880-894.


McCarthy, J. et al. (1955). Also see Crevier (1993), p. 48.

Nilsson, N. J. (1998). Artificial Intelligence: A New Synthesis. Morgan Kaufmann Publishers.

Dutta, S., Rakshit, S., & Chatterjee, D. (2020). Use of Artificial Intelligence in Indian Agriculture. Food and Scientific Reports, 1(4), 65-72.

