Artificial intelligence (AI) originated several decades ago, but in the last two to three years it has experienced a boom across all knowledge areas, mainly in finance, engineering, and international banking, and, more recently, in the field of medicine.
Since all these AI programs are created by human beings, it should not be possible for them to overtake human intelligence.
AI has been a continuous endeavor of scientists and engineers for over 65 years. The simple contention is that human-created machines can do more than just labor-intensive work; they can develop human-like intelligence. Whether we are aware of it or not, AI has penetrated our daily lives, playing novel roles in industry, healthcare, transportation, education, and many other areas close to the public1,2.
High-quality datasets built in typical and practical scenarios are encouraged to be shared in the research community. In this context, researchers devote effort to designing and carrying out exhaustive experiments to collect real-world data representing typical working conditions. In this process, the choice of variables and how the data are measured constitute part of the intellectual input by humans. As one of the key factors contributing to successful AI training, such experience has a non-negligible influence on the quality of the datasets in terms of comprehensiveness and typicality, because it determines how well the data distribution of the constructed dataset aligns with the ground truth3.
Internal medicine physicians are increasingly interacting with systems that implement AI and machine learning technologies. Some physicians and health care systems are even developing their own AI models, both within and outside of electronic health record systems4.
With the growing availability of vast amounts of patient data and unprecedented levels of clinician burnout, the proliferation of these technologies is cautiously welcomed by some physicians. Others think it presents challenges to the patient-physician relationship and the professional integrity of physicians4.
An American College of Physicians (ACP) position paper describes the College's foundational positions and recommendations regarding the use of AI-enabled tools and systems in the provision of healthcare4. The ACP calls for more research on the clinical and ethical implications of these technologies and their effects on patient health and well-being.
I believe that, in the field of medicine, the many diagnostic and treatment guidelines and algorithms published as position papers by different medical colleges and societies have served, to some extent, as a basis for creating AI tools built on such procedures. In fact, to estimate body mass index (BMI) and cardiovascular risk, for instance, formulas are applied to patient data to calculate disease risk.
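To illustrate how such formulas translate directly into software, the BMI calculation (weight in kilograms divided by height in meters squared) can be sketched in a few lines of Python. The function names and the simplified WHO adult category thresholds below are illustrative assumptions, not taken from any specific clinical system:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    if height_m <= 0:
        raise ValueError("height must be positive")
    return weight_kg / height_m ** 2


def bmi_category(value: float) -> str:
    """Simplified WHO adult BMI categories (illustrative thresholds)."""
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal weight"
    if value < 30:
        return "overweight"
    return "obesity"


# Example: a patient weighing 80 kg with a height of 1.75 m
value = bmi(80, 1.75)
print(round(value, 1))        # 26.1
print(bmi_category(value))    # overweight
```

Clinical risk calculators (e.g., for cardiovascular risk) follow the same pattern at larger scale: patient variables feed a validated formula whose output supports, but does not replace, the physician's judgment.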
Finally, in the field of medicine, it will be very difficult for AI to replace the physician in performing a physical examination of the lungs, the abdomen, or the nervous system.
Funding
The authors declare that they have not received funding.
Conflicts of interest
The authors declare no conflicts of interest.
Ethical considerations
Protection of humans and animals. The authors declare that no experiments involving humans or animals were conducted for this research.
Confidentiality, informed consent, and ethical approval. The study does not involve patient personal data nor requires ethical approval. The SAGER guidelines do not apply.
Declaration on the use of artificial intelligence. The authors declare that no generative artificial intelligence was used in the writing of this manuscript.
