AI POLICY
Last Updated: 18/12/2025
This document is owned by Tradies Software Pty Ltd on behalf of itself and its group of companies, which are licensed to use the i4T Global suite of software platforms.
The companies are:
i4T Oceania Pty Ltd (Australia, AU)
i4T Sarl (Switzerland, CH)
i4T LLC (United States of America, USA)
The i4T Global suite of software platforms includes, but is not limited to:
i4T Maintenance
i4T Business
i4Tradies
i4T CRM+
1. Our commitment to responsible AI
(a) i4T Global is committed to using Artificial Intelligence (AI) ethically, safely, and in compliance with all applicable standards. We recognize the benefits of AI for improving our services, but we prioritize fairness, transparency, privacy, and accountability in all AI-driven features. Humans remain at the center of oversight and decision-making for our AI systems – we ensure that automated processes are subject to human review, especially where they impact customers or partners. We will always honor our obligations regarding security, privacy, confidentiality, and ethics when implementing AI. This AI Policy outlines how we deploy and govern AI technologies across i4T Global platforms in a responsible manner.
(b) Scope: This AI Policy applies to all uses of AI across the i4T Global suite (i4T Maintenance, i4T Business, i4Tradies, i4T CRM+, and related platforms). It covers AI features integrated into our products (such as predictive analytics, workflow suggestions, and any AI-powered assistant features) as well as the use of third-party AI tools within our operations. It also provides guidelines for our employees and contractors in their use of AI. By using any i4T Global service or tool that incorporates AI, you consent to the practices outlined in this Policy.
(c) Definition of AI: For purposes of this Policy, “AI systems” refers to machine-based systems that make predictions, recommendations, or decisions influencing real or virtual environments (e.g. algorithms for scheduling optimization, generative content tools, lead scoring engines). This definition is consistent with internationally accepted definitions (such as the OECD’s), which describe AI as systems that infer how to achieve given objectives using data-driven inputs to generate outputs. We consider both internally developed AI models and any AI services provided by third parties to be within this scope.
2. Compliance with laws and ethical frameworks
(a) Australian standards: i4T Global adheres to Australia’s AI governance requirements and ethical guidelines. We follow the Australian Government’s AI Ethics Principles, which emphasize outcomes like fairness, privacy protection, transparency, and accountability.
We also comply with guidance from Australian regulators, including the Office of the Australian Information Commissioner (OAIC) and the Australian Competition & Consumer Commission (ACCC).
Our AI practices reflect the Australian AI Ethics Framework and principles published by the government to ensure AI is safe, secure, and reliable. For example, we strive to ensure our AI systems do not result in unfair discrimination and that they uphold data privacy and security at all times.
In line with regulatory expectations, humans oversee our AI-driven decisions, and we take a risk-based approach to AI use to ensure it serves the community in a safe, responsible way.
(b) EU regulations: For our operations and users in the European Union, we are committed to full compliance with the EU General Data Protection Regulation (GDPR) in any AI-related data processing.
We also align with the emerging requirements of the EU Artificial Intelligence Act, the EU’s comprehensive framework for AI. This means we classify and manage our AI systems according to their risk level and implement the necessary safeguards for higher-risk AI applications.
We support the EU AI Act’s core objectives that AI systems be safe, transparent, traceable, and non-discriminatory, with appropriate human oversight. For instance, if our platform deploys AI that produces content or decisions, we ensure transparency about its AI nature and enable human intervention or review as needed.
We do not engage in any AI practices banned in the EU (such as social scoring or discriminatory profiling).
For generative AI features, we follow EU guidance on transparency – clearly informing users when content is AI-generated and taking steps to prevent outputs that violate laws or rights.
(c) United States guidelines: In the United States, i4T Global abides by all relevant laws and regulations enforced by agencies like the Federal Trade Commission (FTC). The FTC has made it clear that there is “no AI exemption to the laws on the books.”
We take this seriously: our use of AI will not be misleading or unfair to consumers. We ensure that any AI-driven claims or functionalities in our products are truthful and substantiated, and we will not use AI in ways that could “turbocharge” fraud or bias.
Additionally, we look to frameworks such as the NIST AI Risk Management Framework (RMF) for best practices on managing AI risks. The NIST AI RMF provides a voluntary, yet highly regarded, set of guidelines to incorporate “trustworthiness considerations into the design, development, use, and evaluation” of AI systems.
By aligning with NIST’s principles, which cover its Govern, Map, Measure, and Manage functions, we aim to cultivate trustworthy AI solutions that meet high standards of reliability and safety.
In summary, across all regions – Australia, the EU, the US, and beyond – we continuously monitor and comply with evolving AI laws, regulations, and ethical codes. This includes adhering to consumer protection laws, anti-discrimination laws, privacy laws, and industry-specific regulations that apply to AI.
3. Ethical AI principles we uphold
We have adopted a set of core ethical principles to guide all AI development and usage at i4T Global. These principles are in harmony with globally recognized AI ethics frameworks (such as Australia’s AI Ethics Principles and the OECD and EU guidelines) and ensure our AI is used for the benefit of our users and stakeholders:
Fairness: Our AI systems are designed and tested to be inclusive and to avoid unfair bias or discrimination. We actively work to prevent AI outcomes that could disadvantage individuals or groups based on attributes like age, gender, race, or other protected characteristics.
All decisions made or assisted by AI – such as job scheduling, lead prioritization, or risk scoring – should be impartial and equitable, in compliance with anti-discrimination laws and community expectations. We periodically review AI outputs for bias and take corrective action if biases are detected.
Transparency: We strive to make the role of AI transparent to our users. Whenever AI is significantly involved in generating content or making decisions, we will disclose this to the affected users in a clear manner. Users should be able to understand when they are interacting with or subject to an AI-driven process.
We provide explanations of how key AI features work at a high level, and we offer avenues for users to inquire about decisions. This commitment echoes the principle that people have a right to know when AI is influencing an outcome and to receive meaningful information about the logic involved. For example, if our CRM uses an AI model to suggest an email draft or a job assignment, we will indicate that it is AI-suggested and ensure the user can review and modify it.
Privacy: Protecting privacy is paramount in all AI initiatives. Any personal data used in AI algorithms is handled in accordance with our Privacy Policy and applicable privacy laws. We apply privacy-by-design, meaning we minimize data usage, anonymize or pseudonymize data where feasible, and secure data throughout the AI lifecycle. AI systems at i4T Global are built to respect and uphold privacy rights and data protection regulations. We do not feed personal or sensitive client data into AI models without proper legal basis and consent.
Moreover, we never use customer data to train public or non-i4T AI models without permission – for AI features that learn from user data, we confine that training to controlled environments or use anonymized data so individual privacy is preserved.
Accountability: We take accountability for the outcomes produced by our AI. i4T Global has identifiable team members responsible for each stage of an AI system’s lifecycle, from development and testing to deployment and monitoring.
If an AI system causes an unexpected result or error, we will investigate and address it with human oversight. Ultimately, human judgment is built into our AI governance, ensuring that AI remains a tool under human control, not an autonomous authority. We also enable a “human-in-the-loop” approach for critical processes: employees or users can review, override, or contest AI-driven results when appropriate. This means, for example, if our AI flags a transaction as fraudulent or suggests a business decision, there is a human review process in place.
We encourage feedback from users and employees if an AI output seems incorrect or problematic, and we will take responsibility in rectifying issues. Accountability also entails compliance with all legal obligations – as regulators have noted, claims of innovation are no excuse for lawbreaking in AI.
These ethical principles (fairness, transparency, privacy, and accountability) are embedded in our product design and corporate culture. They align with widely accepted AI ethics standards, and we continuously train our staff and update our systems to uphold these values.
We believe that adhering to these principles not only ensures compliance but also builds trust with our users and partners.
4. Use of AI in i4T Global services
(a) AI-driven features in our products: We may incorporate AI technologies to enhance user experience, efficiency, and decision-making within our platforms.
Whenever we deploy an AI-powered feature, our goal is to augment human capabilities, not replace them. AI is used to provide recommendations or automation that users can then review and approve. We ensure that no critical decisions are made by AI alone without human oversight – for instance, AI might prioritize a list of tasks, but a human manager can adjust or override those priorities based on context.
(b) Data usage and training: Our AI systems may learn from data to become smarter and more useful. In doing so, we follow strict data governance. If we use customer or operational data to train or improve an AI model, we will anonymize or aggregate it whenever possible to protect individual identities.
Any personal data that is processed by AI features is handled in line with consent provided and our privacy commitments. Importantly, we do not use your data to train third-party or public AI models.
We do not sell or transfer data to outside parties for AI development. In cases where our AI features rely on third-party AI services (such as an integrated AI translation or vision API), we ensure that only the minimum necessary data is sent and that it is protected via encryption and contractual agreements (see Section 5 below).
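To make the safeguard above concrete, the sketch below shows one way a pre-training anonymization and aggregation step could look. It is a minimal, hypothetical example: the field names, salt handling, and suppression threshold are illustrative assumptions, not a description of i4T Global's actual data pipeline.

```python
# Hypothetical pre-training safeguard: pseudonymize direct identifiers and
# aggregate records so no individual-level row reaches a model. Field names
# and the small-cell threshold are illustrative, not i4T Global's schema.
import hashlib

def pseudonymize(record: dict, salt: str) -> dict:
    """Replace direct identifiers with salted one-way hashes."""
    out = dict(record)
    for field in ("customer_id", "email", "phone"):
        if field in out:
            out[field] = hashlib.sha256((salt + str(out[field])).encode()).hexdigest()
    return out

def aggregate_job_counts(records: list[dict]) -> dict[str, int]:
    """Reduce rows to per-region counts, suppressing small cells that could re-identify someone."""
    counts: dict[str, int] = {}
    for r in records:
        counts[r["region"]] = counts.get(r["region"], 0) + 1
    return {region: n for region, n in counts.items() if n >= 5}
```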
(c) Human control and user empowerment: All AI features are designed with user control in mind. Users are typically given the choice to use an AI recommendation or not. If an AI-driven alert or flag is generated, our users or our support team will manually verify it before any final action is taken.
We also provide the ability to opt-out of certain AI-driven functionalities if feasible. If at any point you are uncomfortable with an AI feature in our platform, you can contact us to discuss alternatives or to understand how you can disable or mitigate its use. We document the functionality of major AI features in our user guides for transparency, and we welcome user questions about how an AI arrived at a particular result.
(d) No automated decision-making without consent (for GDPR contexts): In jurisdictions like the EU where GDPR applies, if any AI system were to make a decision with legal or similarly significant effects on an individual, we would only do so in compliance with GDPR Article 22 – meaning we would either have your explicit consent or the decision would be necessary for a contract or authorized by law, and even then with suitable safeguards.
At present, i4T Global does not employ fully automated decision-making of that nature; the AI outputs we plan to introduce are advisory or assistive, with a human ultimately in the loop. We remain committed to keeping it that way unless proper safeguards and user rights are put in place.
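As a minimal illustration of the advisory, human-in-the-loop pattern described in this section, the sketch below wraps an AI output so that it cannot take effect until a named human reviewer approves it. The class and field names are hypothetical, not a depiction of our production systems.

```python
# Hypothetical wrapper enforcing the human-in-the-loop pattern described above:
# an AI output stays advisory until a human reviewer signs off on it.
from dataclasses import dataclass

@dataclass
class AISuggestion:
    payload: str                    # e.g. a draft email or a proposed job assignment
    approved_by: str | None = None  # set only by a human reviewer

    def approve(self, reviewer: str) -> None:
        """Record the human who reviewed and accepted this suggestion."""
        self.approved_by = reviewer

    def apply(self) -> str:
        """Refuse to take effect without prior human approval."""
        if self.approved_by is None:
            raise PermissionError("AI suggestion requires human review before use")
        return self.payload
```

In this pattern, applying an unreviewed suggestion fails by design, so automated output cannot bypass the review step.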
5. Use of third-party AI tools and services
(a) Third-party AI integrations: In some cases, i4T Global may integrate or utilize third-party AI services to provide certain functionality. For example, our platform might use a third-party AI service for speech-to-text transcription, map routing optimization, or credit card fraud detection.
Whenever we select an external AI tool or vendor, we conduct due diligence to ensure they meet our ethical and security standards. We only partner with AI providers that have robust measures for permission, transparency, and fairness in how they operate.
In other words, following the example of The Guardian and other responsible organizations, we choose third-party AI tools whose providers have addressed content permission rights, are transparent about how their algorithms operate, and ensure fair value and reward for any data or content used.
All third-party AI services we use must comply with privacy laws (e.g., not using personal data we send for their own purposes beyond providing the service) and preferably align with frameworks like the EU AI Act and Australian AI Ethics Principles.
(b) Data sharing with third-party AI: If an AI integration requires sending data to an external service, we will disclose this in our product interface or documentation.
We minimize the data sent – using anonymization or partial data when possible – and we ensure such data transfers are secure (encrypted in transit) and bound by data processing agreements.
Third-party AI providers are not allowed to use i4T Global user data for training their general models unless it’s explicitly agreed and disclosed. Typically, our contracts with AI vendors stipulate that they act only as processors of our data for the specific function we need, and they must not retain or reuse the data for other purposes.
(c) Monitoring third-party tools: We continuously monitor the outputs and performance of third-party AI components in our system. If any integrated AI service produces unreliable or biased results, or if the provider changes their practices in a way that conflicts with our standards, we will intervene – this could mean tuning how we use the service, or discontinuing it if necessary.
We also stay updated on third-party AI tools’ compliance with new regulations. For example, if a cloud AI API we use is classified as a high-risk system under the EU AI Act, we will ensure that the provider has fulfilled the required conformity assessments, transparency, and risk management obligations.
(d) Content and IP rights: When using generative AI (whether built in-house or third-party) that involves content creation or processing, we respect intellectual property rights. We instruct our AI systems and employees not to ingest proprietary third-party content without permission. For instance, if we ever use an AI to summarize an industry article for a blog, we ensure we have rights to that content or we use only what is publicly permissible.
Similarly, outputs from AI that involve any third-party data are handled carefully to avoid IP infringements. As a rule, employees must not input confidential or copyrighted data into any AI tool that is not approved (see Section 6 for employee guidelines). This prevents unintended exposure of sensitive information and protects the rights of content owners.
6. Employee use of AI
(a) Internal AI use policy: i4T Global empowers its employees to leverage AI tools to improve productivity and innovation, but this must be done responsibly and in line with company policies. All staff are expected to follow these guidelines when using AI (including public tools like ChatGPT or any AI-based software) in their work:
Approved tools only: Employees should use only company-approved AI tools for handling any non-public business information. Public or consumer AI services (especially those in the cloud) may not have adequate privacy safeguards. No confidential or proprietary i4T Global data is to be entered into third-party AI tools that have not been vetted and approved by i4T Global. This includes customer data, code repositories, financial information, strategy documents, or any sensitive information.
Inputting such data into an external AI service could risk exposure to unauthorized parties. We have an internal list of permitted AI software, and our IT team reviews new AI tools through a security and privacy assessment before approval.
Data protection: Even with approved tools, employees should use the highest privacy settings. For example, if using an AI coding assistant, disable any setting that would share our code back to the vendor for training. Always assume that anything input into an AI could be seen by someone else, and act accordingly. When in doubt, consult our IT security team before using a tool for work purposes.
Accuracy and verification: Employees must remember that AI-generated output can be inaccurate, misleading, or completely fabricated at times. Therefore, any content or decision derived from an AI must be carefully reviewed by the employee. Do not rely on AI output as fact without verification. If an AI drafting tool writes an email or a report, the employee must proofread it for correctness, completeness, and tone. If an AI analytic tool provides an insight (e.g., trend analysis), it should be cross-checked against known data. We hold our staff accountable for content they publish or decisions they make using AI assistance – just as if they were doing it manually. Under no circumstance should an employee blindly forward AI-generated content to a customer or stakeholder without human review.
Ethical use: Employees should ensure that their use of AI aligns with our ethics and values. This means not using AI in ways that would violate any law, our Code of Conduct, or individual privacy. For example, generating deepfakes or using AI to profile individuals in a potentially discriminatory way is strictly forbidden. Our workforce is trained periodically on the ethical use of AI, including understanding AI bias and avoiding over-reliance on automation. If an employee is unsure whether a particular AI use is appropriate, they are encouraged to seek guidance from a manager or the compliance team.
No circumvention of policies: The introduction of AI does not grant exceptions to existing rules. All standard policies (data protection, confidentiality, harassment, etc.) still fully apply when using AI. For instance, using an AI tool to generate content does not excuse plagiarism – sources should still be credited appropriately. Similarly, if our policies forbid sharing certain client information externally, putting it into an AI system is equally a violation. We also caution employees to be mindful of social engineering: AI can generate very convincing phishing emails and fake messages, so cybersecurity vigilance is more important than ever.
(b) Training and support: i4T Global provides training resources to employees on how to effectively and safely use AI in their roles. This includes best practices for prompt engineering (asking AI the right questions), reviewing AI outputs, and known pitfalls to avoid.
We encourage a culture of open dialogue about AI: employees can share tips or raise concerns about AI use in our internal forums. Our IT and compliance teams stay available to support any AI-related queries or incidents.
Additionally, if an employee discovers a potential issue (like an AI tool misbehaving or a data leak risk), they are required to report it immediately so we can take corrective action.
(c) Personal use vs. business use: While these guidelines focus on business use of AI, employees should also be careful when using AI in a personal capacity if there’s any overlap with work. For example, using a personal AI account at home to brainstorm work ideas is not advised if it involves discussing confidential work topics. We recommend keeping work-related AI usage within our monitored and approved environment to protect both the company and the employee.
7. AI governance and oversight at i4T Global
(a) Governance structure: We have established an internal AI Governance Committee (AGC) to oversee the implementation of this AI Policy and the responsible use of AI across the organization. The AGC is a cross-functional team including members from our executive leadership, product development, data science, legal/compliance, and IT security departments.
Its mandate is to provide guidance, review significant AI initiatives, and ensure accountability. Similar to leading organizations that set up AI oversight groups, our committee evaluates potential risks and ethical implications of AI projects before deployment.
The committee meets regularly to discuss ongoing AI uses, emerging regulations, and any incidents or concerns that have arisen. It also keeps our leadership informed of AI-related opportunities and challenges.
(b) Accountability and roles: We have designated a senior executive as the “AI Accountable Officer” for i4T Global, who chairs the AGC. This person is responsible for ensuring that the AI Policy is implemented in practice and that every department follows through on the guidelines. They serve as a point of contact for any internal or external inquiries about our AI use.
Additionally, each major AI system or feature we deploy has an assigned owner (typically a product manager or data scientist) who is responsible for its ongoing performance and compliance. These owners maintain documentation on their AI system’s purpose, design, training data, and testing results as part of our AI inventory.
(c) Risk management and review: Following best practices, we incorporate risk management throughout the AI lifecycle. This includes a pre-deployment risk assessment for new AI features (to identify potential harms such as bias, security vulnerabilities, or legal non-compliance) and implementing mitigation strategies before launch. We maintain an internal AI register of our systems, which logs key information and risk levels, and we update it whenever there are significant changes.
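By way of illustration, a single entry in such a register might record fields like the following. The schema and values are assumptions made for this example, not our actual inventory format.

```python
# Illustrative AI-register entry; fields and values are hypothetical.
ai_register_entry = {
    "system": "lead-scoring-model",
    "owner": "assigned product manager",
    "purpose": "rank inbound leads for follow-up",
    "risk_level": "medium",          # e.g. low / medium / high
    "training_data": "anonymized CRM interaction history",
    "last_review": "2025-11-01",
    "human_oversight": "sales manager reviews all rankings before action",
}
```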
High-impact or higher-risk AI applications (for example, anything affecting personal safety or significant financial decisions) receive heightened scrutiny and may require formal approval from the AGC before going live. We also integrate AI risks into our overall enterprise risk management, meaning they are reviewed periodically like other business risks.
We actively monitor AI system outputs in production to catch any issues early. This might involve automated alerts for unusual patterns, as well as periodic audits for fairness and accuracy. For example, if an AI model is used in matching customers to service providers, we might periodically audit the matches to ensure no unintended bias is creeping in. We also stay up-to-date with technical and regulatory developments (e.g., improvements in AI explainability, new legal requirements) so we can adapt our systems accordingly.
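One simple, widely used check of this kind compares positive-outcome rates across groups (a demographic-parity ratio, with the common four-fifths heuristic as a flag for review). The sketch below is illustrative only; the data shape and threshold are assumptions, not our production audit code.

```python
# Illustrative fairness audit: compare selection rates across groups and flag
# the result if the lowest rate falls below 80% of the highest (the common
# "four-fifths" heuristic). Data shape and threshold are assumptions.
from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Per-group rate of positive outcomes, e.g. (group_label, was_matched)."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for group, selected in outcomes:
        totals[group] += 1
        positives[group] += int(selected)
    return {g: positives[g] / totals[g] for g in totals}

def needs_review(rates: dict[str, float], threshold: float = 0.8) -> bool:
    """Flag for human review if the parity ratio drops below the threshold."""
    return min(rates.values()) / max(rates.values()) < threshold
```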
(d) Continuous improvement: AI technology and its regulatory environment are evolving rapidly. We view our AI Policy and practices as living components of our governance. The AGC annually reviews this policy and updates it to align with new laws, ethical standards, or company values.
We also learn from our experiences: any incident or near-miss (say an AI error that had to be corrected) is treated as a learning opportunity to strengthen our controls. We invest in ongoing training for our technical teams on topics like AI fairness, security, and compliance. By fostering a culture of responsibility and learning, we aim to remain at the forefront of responsible AI use in our industry.
(e) Transparency and stakeholder engagement: In line with our commitment to transparency, we will communicate openly about our AI use. This AI Policy is published on our website for customers and partners to review. Major changes to this policy will be highlighted and communicated. We may also publish additional “AI system cards” or FAQs for specific high-profile AI features to help users understand how they work. Internally, we encourage employees to flag any ethical concerns or suggestions regarding AI; our whistleblower protections extend to anyone raising issues about AI usage. Externally, if clients or users have questions or concerns about an AI-powered feature, we invite them to contact us. We believe that maintaining an open dialogue with stakeholders about AI will help build trust and allow us to serve our customers better.
8. Review and updates to this AI Policy
(a) Policy updates: This AI Policy may be updated from time to time to reflect new practices, advancements in technology, or changes in regulations. We will review the policy at least once a year and also whenever there are significant changes in our AI approach or the external environment. For example, if new legislation like the EU AI Act or updated Australian guidelines come into effect, or if we launch a major new AI capability, we will update this document accordingly. The “Last Updated” date at the top will always reflect the date of the latest revision.
(b) Notice of changes: If changes to the AI Policy are substantial, we will provide notice to our users. This may include an announcement via email, an in-app notification, or an update on our website’s policy change log. We encourage users to periodically review this policy to stay informed about how i4T Global is managing AI responsibly.
(c) Historic versions: For transparency, prior versions of this AI Policy will be archived and available upon request. This allows anyone to track how our stance and measures regarding AI have evolved over time.
(d) Contact and questions: If you have any questions, feedback, or concerns regarding this AI Policy or i4T Global’s use of AI, please contact us at Hello@i4TGlobal.com or via our website contact form. Your input can help us improve and ensure our AI remains aligned with our ethical commitments.