Algorithmic Culture: How AI Shapes Human Identity and Social Norms (Budaya Algoritmik: Bagaimana AI Membentuk Identitas Manusia dan Norma Sosial)

Authors

  • Djufri Djufri, Universitas Muhammadiyah Berau

DOI:

https://doi.org/10.37329/ganaya.v8i2.4134

Keywords:

Artificial Intelligence, Social Identity, Echo Chamber, Social Polarization, Social Norms

Abstract

In the digital age, artificial intelligence (AI) has become a major force shaping user experiences and social interactions in cyberspace. AI algorithms used across digital platforms tailor content to individual preferences, which indirectly shapes social identities and social norms within online communities. This research examines how AI affects user interaction patterns, forms social identities, and strengthens or weakens social dynamics through mechanisms such as echo chambers and filter bubbles. The study uses a qualitative approach with an exploratory method, involving semi-structured interviews with 20-30 participants from various backgrounds as well as a quantitative survey of 300-500 respondents. The results show that AI plays a dual role in shaping social attachment: 78% of respondents feel more connected to their digital community, yet at the same time experience limited access to different perspectives because of algorithmic personalization. Other findings suggest that AI algorithms contribute to increased social polarization by reinforcing boundaries between groups with differing views. In conclusion, while AI has the potential to create a more inclusive digital space, current algorithm implementations focus on extreme personalization, which narrows openness to other perspectives. More inclusive regulation and digital literacy education are therefore needed so that users can be more critical of the information they consume and avoid being trapped in a narrow information cycle.
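The personalization feedback loop behind the echo-chamber and filter-bubble effects described in the abstract can be sketched with a toy "rich-get-richer" model (an illustration under assumed dynamics, not the article's method): content is served in proportion to current preferences, each exposure reinforces the matching preference, and the diversity of what the user sees, measured as Shannon entropy, collapses toward a single topic.

```python
import math

def recommend_round(prefs, gain=0.5):
    """One round of a toy personalization loop: content is served in
    proportion to current preferences, and each topic's preference is
    reinforced in proportion to its own exposure share."""
    total = sum(prefs)
    shares = [p / total for p in prefs]
    return [p * (1 + gain * s) for p, s in zip(prefs, shares)]

def exposure_entropy(prefs):
    """Shannon entropy (in bits) of the exposure distribution; lower
    entropy means the user sees a narrower slice of topics."""
    total = sum(prefs)
    return -sum((p / total) * math.log2(p / total) for p in prefs if p > 0)

prefs = [0.30, 0.25, 0.20, 0.15, 0.10]  # initial interest in five topics
h_start = exposure_entropy(prefs)
for _ in range(200):
    prefs = recommend_round(prefs)
h_end = exposure_entropy(prefs)
print(f"exposure entropy: {h_start:.2f} bits -> {h_end:.2f} bits after 200 rounds")
```

Under this assumed dynamic, entropy falls from roughly 2.2 bits (all five topics in view) toward 0 bits (one dominant topic): the narrowing of perspective that the abstract attributes to extreme personalization.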

Published

19-03-2025

How to Cite

Djufri, D. (2025). Budaya Algoritmik: Bagaimana AI Membentuk Identitas Manusia dan Norma Sosial. Ganaya : Jurnal Ilmu Sosial Dan Humaniora, 8(2), 176–184. https://doi.org/10.37329/ganaya.v8i2.4134

Section

Articles