Research Article

GenAI copilot as an “innovation operating system”: controls, learning loops, and integration prerequisites

by Basil Obute, Kingsley C. Ugwu, Nzeribe A. Okeh
International Journal of Applied Information Systems
Foundation of Computer Science (FCS), NY, USA
Volume 13 - Issue 2
Published: May 2026
Authors: Basil Obute, Kingsley C. Ugwu, Nzeribe A. Okeh
DOI: 10.5120/ijaisfa260523eed8

Basil Obute, Kingsley C. Ugwu, Nzeribe A. Okeh. GenAI copilot as an “innovation operating system”: controls, learning loops, and integration prerequisites. International Journal of Applied Information Systems 13, 2 (May 2026), 79-94. DOI=10.5120/ijaisfa260523eed8

                        @article{ 10.5120/ijaisfa260523eed8,
                        author    = { Basil Obute and Kingsley C. Ugwu and Nzeribe A. Okeh },
                        title     = { GenAI copilot as an “innovation operating system”: controls, learning loops, and integration prerequisites },
                        journal   = { International Journal of Applied Information Systems },
                        year      = { 2026 },
                        volume    = { 13 },
                        number    = { 2 },
                        pages     = { 79-94 },
                        doi       = { 10.5120/ijaisfa260523eed8 },
                        publisher = { Foundation of Computer Science (FCS), NY, USA }
                        }
                        %0 Journal Article
                        %D 2026
                        %A Basil Obute
                        %A Kingsley C. Ugwu
                        %A Nzeribe A. Okeh
                        %T GenAI copilot as an “innovation operating system”: controls, learning loops, and integration prerequisites
                        %J International Journal of Applied Information Systems
                        %V 13
                        %N 2
                        %P 79-94
                        %R 10.5120/ijaisfa260523eed8
                        %I Foundation of Computer Science (FCS), NY, USA
Abstract

Enterprise GenAI copilot programs most commonly fail not because of weak model capabilities but because organizations lack the operating system needed to integrate and manage the copilot's key elements: (i) data lineage and retrieval provenance, (ii) tool integration and access control, (iii) governance-as-code (business rules defined and enforced through code), (iv) end-to-end traceability and approval processes, and (v) learning loops that turn measured user activity and incidents into ongoing capability improvements. Drawing on sociotechnical systems, innovation systems, and Responsible AI research, we synthesize these into a five-layer Innovation Operating System (IOS) and propose five falsifiable propositions (P1–P5) examining how IOS maturity, governance density, and learning-loop maturity affect enterprise GenAI copilot performance. The study provides a reference implementation instrumented with (a) IOS layer maturity ratings, (b) a task-class governance density index, and (c) three performance proxies: Innovation Adoption Rate, Control Incident Frequency, and Retrieval Robustness Score. A replication package supplies a blueprint for all elements (schemas, queries, rubrics, notebooks, and a synthetic log generator).
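To make the abstract's measurement apparatus concrete, the Python sketch below generates a toy interaction log and computes a task-class governance density index plus the three performance proxies. It is illustrative only and is not drawn from the paper's replication package: the log fields (task_class, controls_defined, controls_passed, incident, citations_resolved) and the exact metric formulas are assumptions.

# Illustrative sketch only: metric definitions and field names are
# hypothetical, not taken from the paper's replication package.
import random
from dataclasses import dataclass

random.seed(7)

@dataclass
class Interaction:
    task_class: str           # e.g. "drafting", "retrieval", "tool_call"
    controls_defined: int     # controls applicable to this task class
    controls_passed: int      # controls actually enforced on this interaction
    incident: bool            # did this interaction trigger a control incident?
    citations_resolved: bool  # did retrieved citations resolve to governed sources?

TASK_CLASSES = ["drafting", "retrieval", "tool_call"]

def synthetic_log(n: int = 1_000) -> list[Interaction]:
    """Toy stand-in for the paper's synthetic log generator."""
    log = []
    for _ in range(n):
        defined = random.randint(1, 5)
        log.append(Interaction(
            task_class=random.choice(TASK_CLASSES),
            controls_defined=defined,
            controls_passed=random.randint(0, defined),
            incident=random.random() < 0.03,
            citations_resolved=random.random() < 0.9,
        ))
    return log

def governance_density(log: list[Interaction], task_class: str) -> float:
    """Hypothetical index: mean share of defined controls actually enforced."""
    rows = [i for i in log if i.task_class == task_class]
    if not rows:
        return 0.0
    return sum(i.controls_passed / i.controls_defined for i in rows) / len(rows)

def control_incident_frequency(log: list[Interaction]) -> float:
    """Control incidents per 1,000 interactions."""
    return 1_000 * sum(i.incident for i in log) / len(log)

def retrieval_robustness(log: list[Interaction]) -> float:
    """Share of interactions whose citations resolve to governed sources."""
    return sum(i.citations_resolved for i in log) / len(log)

log = synthetic_log()
for tc in TASK_CLASSES:
    print(f"governance density [{tc}]: {governance_density(log, tc):.2f}")
print(f"control incidents per 1k interactions: {control_incident_frequency(log):.1f}")
print(f"retrieval robustness score: {retrieval_robustness(log):.2f}")

A real deployment would derive these measures from the copilot's traceability records rather than a simulated log; the sketch only shows how the three proxies could be operationalized once such records exist.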

References
  • Argyris, C. and Schön, D. A. (1978). Organizational Learning: A Theory of Action Perspective. Addison-Wesley.
  • Baxter, G. and Sommerville, I. (2011). Sociotechnical systems: From design methods to systems engineering. Interacting with Computers, 23(1), 4–17. https://doi.org/10.1016/j.intcom.2010.07.003
  • Brynjolfsson, E., Li, D., and Raymond, L. R. (2023). Generative AI at work. NBER Working Paper No. 31161. National Bureau of Economic Research. https://doi.org/10.3386/w31161
  • Christiano, P., Leike, J., Brown, T. B., Martic, M., Legg, S., and Amodei, D. (2017). Deep reinforcement learning from human preferences. Advances in Neural Information Processing Systems, 30. https://doi.org/10.48550/arXiv.1706.03741
  • Cihon, P., Schuett, J., and Hadfield-Menell, D. (2021). Corporate governance of AI: A research agenda. In Proceedings of AAAI/ACM Conference on AI, Ethics, and Society (pp. 54–60). ACM. https://doi.org/10.1145/3461702.3462527
  • Cohen, W. M. and Levinthal, D. A. (1990). Absorptive capacity: A new perspective on learning and innovation. Administrative Science Quarterly, 35(1), 128–152. https://doi.org/10.2307/2393553
  • Dell'Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., and Lakhani, K. R. (2023). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality. Harvard Business School Working Paper 24-013. https://doi.org/10.2139/ssrn.4573321
  • Edmondson, A. C. (1999). Psychological safety and learning behavior in work teams. Administrative Science Quarterly, 44(2), 350–383. https://doi.org/10.2307/2666999
  • Edquist, C. (2005). Systems of innovation: Perspectives and challenges. In J. Fagerberg, D. C. Mowery, and R. R. Nelson (Eds.), The Oxford Handbook of Innovation (pp. 181–208). Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199286805.003.0007
  • Eisenhardt, K. M. (1989). Building theories from case study research. Academy of Management Review, 14(4), 532–550. https://doi.org/10.2307/258557
  • Eisenhardt, K. M. and Martin, J. A. (2000). Dynamic capabilities: What are they? Strategic Management Journal, 21(10–11), 1105–1121. https://doi.org/10.1002/smj.133
  • European Commission. (2024). Regulation (EU) 2024/1689 of the European Parliament and of the Council (AI Act). Official Journal of the European Union.
  • Flatten, T. C., Engelen, A., Zahra, S. A., and Brettel, M. (2011). A measure of absorptive capacity: Scale development and validation. European Management Journal, 29(2), 98–116. https://doi.org/10.1016/j.emj.2010.11.002
  • Floridi, L., Cowls, J., Beltrametti, M., et al. (2018). An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689–707. https://doi.org/10.1007/s11023-018-9482-5
  • Grant, R. M. (1996). Toward a knowledge-based theory of the firm. Strategic Management Journal, 17(S2), 109–122. https://doi.org/10.1002/smj.4250171110
  • Halevy, A., Korn, F., Noy, N. F., Olston, C., Polyzotis, N., Roy, S., and Whang, S. E. (2016). Goods: Organizing Google's datasets. In Proceedings of SIGMOD 2016. ACM. https://doi.org/10.1145/2882903.2903730
  • Hsieh, H.-F. and Shannon, S. E. (2005). Three approaches to qualitative content analysis. Qualitative Health Research, 15(9), 1277–1288. https://doi.org/10.1177/1049732305276687
  • Kogut, B. and Zander, U. (1992). Knowledge of the firm, combinative capabilities, and the replication of technology. Organization Science, 3(3), 383–397. https://doi.org/10.1287/orsc.3.3.383
  • Koo, T. K. and Li, M. Y. (2016). A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of Chiropractic Medicine, 15(2), 155–163. https://doi.org/10.1016/j.jcm.2016.02.012
  • Krippendorff, K. (2004). Reliability in content analysis: Some common misconceptions and recommendations. Human Communication Research, 30(3), 411–433. https://doi.org/10.1093/hcr/30.3.411
  • Lemley, M. A. and Casey, B. (2021). Fair learning. Texas Law Review, 99(4), 743–784.
  • Lewis, P., Perez, E., Piktus, A., et al. (2020). Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33, 9459–9474. https://doi.org/10.48550/arXiv.2005.11401
  • Lundvall, B.-Å. (Ed.). (1992). National Systems of Innovation: Towards a Theory of Innovation and Interactive Learning. Pinter Publishers.
  • Madaio, M. A., Stark, L., Wortman Vaughan, J., and Wallach, H. (2020). Co-designing checklists to understand organizational challenges and opportunities around fairness in AI. In Proceedings of CHI 2020. ACM. https://doi.org/10.1145/3313831.3376445
  • McKinsey Global Institute. (2023). The Economic Potential of Generative AI: The Next Productivity Frontier. McKinsey and Company.
  • Miles, M. B., Huberman, A. M., and Saldaña, J. (2020). Qualitative Data Analysis: A Methods Sourcebook (4th ed.). SAGE Publications.
  • Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., and Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data and Society, 3(2), 1–21. https://doi.org/10.1177/2053951716679679
  • Nelson, R. R. (Ed.). (1993). National Innovation Systems: A Comparative Analysis. Oxford University Press.
  • NIST. (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). NIST AI 100-1. National Institute of Standards and Technology. https://doi.org/10.6028/NIST.AI.100-1
  • Noy, S. and Zhang, W. (2023). Experimental evidence on the productivity effects of generative artificial intelligence. Science, 381(6654), 187–192. https://doi.org/10.1126/science.adh2586
  • Ouyang, L., Wu, J., Jiang, X., Almeida, D., et al. (2022). Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35, 27730–27744. https://doi.org/10.48550/arXiv.2203.02155
  • Perez, F. and Ribeiro, I. (2022). Ignore previous prompt: Attack techniques for language models. In Workshop on Trustworthy and Reliable Large-Scale Machine Learning Models, ICLR 2022. https://doi.org/10.48550/arXiv.2211.09527
  • Sag, M. (2023). Copyright safety for generative AI. Houston Law Review, 61(2), 295–366.
  • Schick, T., Dwivedi-Yu, J., Dessì, R., Raileanu, R., et al. (2023). Toolformer: Language models can teach themselves to use tools. Advances in Neural Information Processing Systems, 36. https://doi.org/10.48550/arXiv.2302.04761
  • Teece, D. J., Pisano, G., and Shuen, A. (1997). Dynamic capabilities and strategic management. Strategic Management Journal, 18(7), 509–533. https://doi.org/10.1002/smj.882
  • Trist, E. (1981). The sociotechnical perspective. In A. H. Van de Ven and W. F. Joyce (Eds.), Perspectives on Organization Design and Behavior (pp. 19–75). Wiley.
  • UNESCO. (2022). Recommendation on the Ethics of Artificial Intelligence. United Nations Educational, Scientific and Cultural Organization.
  • Weidinger, L., Mellor, J., Rauh, M., Griffin, C., et al. (2022). Taxonomy of risks posed by language models. In Proceedings of FAccT 2022. ACM. https://doi.org/10.1145/3531146.3533088
  • Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., and Cao, Y. (2023). ReAct: Synergizing reasoning and acting in language models. In Proceedings of ICLR 2023. https://doi.org/10.48550/arXiv.2210.03629
  • Yin, R. K. (2018). Case Study Research and Applications: Design and Methods (6th ed.). SAGE Publications.
Index Terms
Computer Science
Information Sciences
Keywords

Generative AI Copilot, Information Systems Governance, Data Lineage, Traceability, Evaluation, Design Science, Responsible AI
