• https://curitibaeparanaemfotosantigas.blogspot.com/2023/05/por-que-os-guardas-reais-do-reino-unido.html
  • Serious concerns over privacy in Brazil - and over what the Brazilian government considers liability for UGC within its territory:

    -- Brazilian justice system suspends Telegram. --

    To clarify: Telegram failed to provide the Ministry of Justice with the full account details of anyone it suspected of violating laws in the country - such as making school threats. Telegram refused to release the names, account information, IP addresses, and chat histories of those users.

    And note, the Ministry of Justice wants to send the same requirements to both Apple and Google as well - including information on WhatsApp. If an investigation shows that a suspect uses a given platform within the country, and the platform refuses to turn over the chats, IP addresses, and account information (including phone number and address), then the platform itself can be fined up to a million reais PER DAY until it complies. The court also reserves the right to block the platform for failure to notify and provide information.

    ^^ Serious concerns on privacy within Brazil. While the intentions are noble, the method the ministry is using is very draconian and Orwellian - putting the onus on the platform to POLICE the work of individuals, their text chats, etc., and making the platform LIABLE for the actions of its users isn't really how they should be going about this.

    Last year (November 2022), when a student shot up two schools, Telegram was also blocked, but it eventually did comply by providing SOME information and was unblocked; Facebook and Google were also asked for information at that time, and DID comply as well.

    But the information the ministry seeks now is far more than Apple, Google, or Telegram have provided before.

    ----------

    In most jurisdictions around the world, a platform isn't liable for UGC (user-generated content) if it has a robust enough Terms of Service and proper moderation techniques in place. Of late, though, for issues involving mass terror and copyright infringement, platforms have been sharing some of the responsibility with their users.

    The reason for this is that while platforms may have what users can and cannot do listed firmly in their terms and conditions or user-generated content policies (community guidelines, for example), the fact that they MODERATE (even if passively, via a report feature) means they are aware of the content on the platform once it is moderated or reported <-- which means the platform's defense of 'I'm just a conduit, I don't know what content is there' is no longer viable.

    Thus EU courts, and some other international courts, have stated that the line between what a platform is and is not liable for becomes blurred once moderation is put into place.

    And yet, platforms are often required in those same jurisdictions to have these moderation techniques in place. It's a catch-22 for platform liability: are they, or are they not, liable? If they are required to moderate, they become partially liable for UGC, even if their terms of service include indemnity clauses.

    This could lead to abuse by users who post UGC that is obscene or illegal in the jurisdiction in question to try to bring down a platform - and it could lead to acts like the Brazilian government's, taking action against a platform that protects user privacy because the platform's liability is in question (ergo, it has some tools to monitor for abuse, so it cannot claim ignorance of the content).

    The question is: where does the line get drawn? In the US, Section 230 draws it clearly - but of late, with COPPA and other state laws requiring platforms to provide moderation tools to protect minors and to guard against bias online, platforms are beginning to share blame with their users for the content in question.

    Platforms may end up having to comply (as Google and Facebook did) by turning over account information to avoid being shut down in a jurisdiction that holds them liable - or they may end up blocking access within those jurisdictions, removing the service from that user base.

    There HAS to be a happy compromise for platforms, and the internet as a whole needs to come together to make governments understand that a platform that allows UGC isn't always liable for the UGC that is generated, as long as the user generating it can be identified.

    Even blockchain cannot protect against such actions - the blockchain still holds information about who the user is (even if it's just the location from which the information was posted) and still has a wallet address or an account identifier to go on. Plus, with the advent of context-aware AI and models of human writing patterns, it's quite possible to identify a subset of suspects based on an individual's writing style and habits, even without account-identifying information.
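    The writing-style identification mentioned above can be sketched with a toy example. This is only a minimal illustration of the classic stylometry baseline (character n-gram profiles compared by cosine similarity), not any real platform's or government's tooling; the author names and sample texts are entirely hypothetical.

    ```python
    from collections import Counter
    import math

    def ngram_profile(text, n=3):
        """Frequency profile of character n-grams in lowercased text."""
        text = text.lower()
        return Counter(text[i:i + n] for i in range(len(text) - n + 1))

    def cosine_similarity(p, q):
        """Cosine similarity between two n-gram frequency profiles."""
        dot = sum(p[g] * q[g] for g in set(p) & set(q))
        norm = (math.sqrt(sum(v * v for v in p.values()))
                * math.sqrt(sum(v * v for v in q.values())))
        return dot / norm if norm else 0.0

    def rank_suspects(anonymous_text, known_samples):
        """Rank candidate authors by stylistic similarity to an anonymous text.

        known_samples maps author name -> a sample of that author's writing.
        Returns (author, score) pairs, most similar first.
        """
        target = ngram_profile(anonymous_text)
        scores = {author: cosine_similarity(target, ngram_profile(sample))
                  for author, sample in known_samples.items()}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Hypothetical writing samples from two candidate authors:
    samples = {
        "alice": "i really love writing long, rambling messages about my cats and my garden",
        "bob": "brb. lol. ttyl. gtg. nvm. idk. smh.",
    }
    anonymous = "really love rambling messages about cats and gardens"
    ranking = rank_suspects(anonymous, samples)
    ```

    Real-world systems use far richer features (function-word frequencies, punctuation habits, learned embeddings), but the principle is the same: writing style alone can narrow an anonymous post down to a small pool of candidates.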

    Thus, even blockchain is not immune to a shift in this liability mindset.

    Trying times indeed for services like Telegram, WhatsApp, social media platforms, etc.